Jan 23 08:14:52 crc systemd[1]: Starting Kubernetes Kubelet... Jan 23 08:14:52 crc restorecon[4772]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 23 08:14:52 
crc restorecon[4772]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 
08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc 
restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 
crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 
crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 08:14:52 crc 
restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 08:14:52 crc restorecon[4772]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 08:14:52 crc restorecon[4772]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc 
restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 08:14:53 crc restorecon[4772]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 08:14:53 crc kubenswrapper[4860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 08:14:53 crc kubenswrapper[4860]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 08:14:53 crc kubenswrapper[4860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 08:14:53 crc kubenswrapper[4860]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 08:14:53 crc kubenswrapper[4860]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 08:14:53 crc kubenswrapper[4860]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.488057 4860 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490306 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490320 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490324 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490328 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490332 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490335 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490339 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490342 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490346 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490349 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490353 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490356 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490360 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490363 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490367 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490370 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490378 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490382 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490385 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490388 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490392 4860 feature_gate.go:330] unrecognized 
feature gate: MetricsCollectionProfiles Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490395 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490399 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490402 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490406 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490409 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490412 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490416 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490419 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490423 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490426 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490429 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490433 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490436 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490440 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490446 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490451 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490455 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490459 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490462 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490466 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490469 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490473 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490477 4860 feature_gate.go:330] unrecognized feature gate: Example Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490480 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490484 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490487 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490490 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490494 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490498 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490501 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490506 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490511 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490515 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490518 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490522 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490526 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490532 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490537 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490542 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
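The long runs of "unrecognized feature gate" warnings here (and repeated several more times below) are OpenShift-specific gate names being offered to a component that only registers the upstream Kubernetes gates: unknown names are warned about and skipped rather than treated as errors, while graduated or deprecated gates that are still set explicitly (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders, KMSv1) produce the feature_gate.go:351/353 notices, and the surviving settings end up in the feature_gate.go:386 summary map. The self-contained Go sketch below imitates that warn-and-skip pattern with a plain map; it is an illustration only, not the actual k8s.io/component-base feature-gate implementation.

```go
// Illustrative sketch of the warn-and-skip behaviour behind the
// "unrecognized feature gate" lines above. Gate names and defaults are a tiny,
// made-up subset; the real registry lives in k8s.io/component-base.
package main

import "fmt"

type gateState struct {
	enabled bool
	ga      bool // graduated gates still set explicitly trigger a notice
}

func main() {
	// Gates this component knows about (hypothetical subset).
	known := map[string]gateState{
		"CloudDualStackNodeIPs":     {enabled: true, ga: true},
		"ValidatingAdmissionPolicy": {enabled: true, ga: true},
		"NodeSwap":                  {enabled: false},
	}

	// Desired settings coming from the cluster-wide FeatureGate config,
	// including OpenShift-only names this component does not register.
	desired := map[string]bool{
		"CloudDualStackNodeIPs": true,
		"GatewayAPI":            true, // unknown here -> warn and skip
		"NodeSwap":              false,
	}

	for name, val := range desired {
		st, ok := known[name]
		if !ok {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		if st.ga {
			fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, val)
		}
		st.enabled = val
		known[name] = st
	}

	// Final effective map, analogous to the feature_gate.go:386 summary line.
	fmt.Printf("feature gates: %v\n", known)
}
```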
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490546 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490550 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490554 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490557 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490561 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490564 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490568 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490572 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490576 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490580 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.490584 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490652 4860 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490660 4860 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490669 4860 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490674 4860 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490679 4860 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490683 4860 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490688 4860 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490694 4860 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490698 4860 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490702 4860 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490706 4860 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490711 4860 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490715 4860 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490720 4860 flags.go:64] FLAG: --cgroup-root="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490724 4860 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490728 4860 flags.go:64] FLAG: --client-ca-file="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490731 4860 flags.go:64] 
FLAG: --cloud-config="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490735 4860 flags.go:64] FLAG: --cloud-provider="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490740 4860 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490746 4860 flags.go:64] FLAG: --cluster-domain="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490750 4860 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490754 4860 flags.go:64] FLAG: --config-dir="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490758 4860 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490763 4860 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490768 4860 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490772 4860 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490776 4860 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490781 4860 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490785 4860 flags.go:64] FLAG: --contention-profiling="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490789 4860 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490793 4860 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490797 4860 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490802 4860 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490807 4860 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490811 4860 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490815 4860 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490819 4860 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490823 4860 flags.go:64] FLAG: --enable-server="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490827 4860 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490832 4860 flags.go:64] FLAG: --event-burst="100" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490836 4860 flags.go:64] FLAG: --event-qps="50" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490841 4860 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490846 4860 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490850 4860 flags.go:64] FLAG: --eviction-hard="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490855 4860 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490859 4860 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490863 4860 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490867 4860 flags.go:64] FLAG: --eviction-soft="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490871 4860 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490875 4860 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490880 4860 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490884 4860 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490887 4860 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490891 4860 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490895 4860 flags.go:64] FLAG: --feature-gates="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490900 4860 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490904 4860 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490909 4860 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490913 4860 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490917 4860 flags.go:64] FLAG: --healthz-port="10248" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490921 4860 flags.go:64] FLAG: --help="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490925 4860 flags.go:64] FLAG: --hostname-override="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490928 4860 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490933 4860 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490936 4860 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490940 4860 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490945 4860 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490949 4860 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490953 4860 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490957 4860 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490961 4860 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490965 4860 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490969 4860 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490973 4860 flags.go:64] FLAG: --kube-reserved="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490977 4860 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490981 4860 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490985 4860 flags.go:64] FLAG: 
--kubelet-cgroups="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490989 4860 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490993 4860 flags.go:64] FLAG: --lock-file="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.490997 4860 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491001 4860 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491006 4860 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491027 4860 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491031 4860 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491035 4860 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491039 4860 flags.go:64] FLAG: --logging-format="text" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491043 4860 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491047 4860 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491051 4860 flags.go:64] FLAG: --manifest-url="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491055 4860 flags.go:64] FLAG: --manifest-url-header="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491061 4860 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491065 4860 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491071 4860 flags.go:64] FLAG: --max-pods="110" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491174 4860 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491178 4860 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491182 4860 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491186 4860 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491190 4860 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491194 4860 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491198 4860 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491210 4860 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491214 4860 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491218 4860 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491223 4860 flags.go:64] FLAG: --pod-cidr="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491229 4860 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 08:14:53 crc 
kubenswrapper[4860]: I0123 08:14:53.491237 4860 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491242 4860 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491247 4860 flags.go:64] FLAG: --pods-per-core="0" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491252 4860 flags.go:64] FLAG: --port="10250" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491258 4860 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491262 4860 flags.go:64] FLAG: --provider-id="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491267 4860 flags.go:64] FLAG: --qos-reserved="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491272 4860 flags.go:64] FLAG: --read-only-port="10255" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491277 4860 flags.go:64] FLAG: --register-node="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491282 4860 flags.go:64] FLAG: --register-schedulable="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491287 4860 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491296 4860 flags.go:64] FLAG: --registry-burst="10" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491300 4860 flags.go:64] FLAG: --registry-qps="5" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491305 4860 flags.go:64] FLAG: --reserved-cpus="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491310 4860 flags.go:64] FLAG: --reserved-memory="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491316 4860 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491321 4860 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491326 4860 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491331 4860 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491336 4860 flags.go:64] FLAG: --runonce="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491340 4860 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491344 4860 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491349 4860 flags.go:64] FLAG: --seccomp-default="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491353 4860 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491357 4860 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491361 4860 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491365 4860 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491370 4860 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491374 4860 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491378 4860 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491381 4860 flags.go:64] FLAG: 
--storage-driver-user="root" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491385 4860 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491389 4860 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491394 4860 flags.go:64] FLAG: --system-cgroups="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491398 4860 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491414 4860 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491418 4860 flags.go:64] FLAG: --tls-cert-file="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491422 4860 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491427 4860 flags.go:64] FLAG: --tls-min-version="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491431 4860 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491435 4860 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491439 4860 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491443 4860 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491447 4860 flags.go:64] FLAG: --v="2" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491453 4860 flags.go:64] FLAG: --version="false" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491458 4860 flags.go:64] FLAG: --vmodule="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491463 4860 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491467 4860 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491583 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491589 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491592 4860 feature_gate.go:330] unrecognized feature gate: Example Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491597 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491600 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491604 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491608 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491612 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491615 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491619 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491622 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:14:53 crc 
kubenswrapper[4860]: W0123 08:14:53.491626 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491629 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491633 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491636 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491640 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491643 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491647 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491651 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491654 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491658 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491661 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491665 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491669 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491672 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491676 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491680 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491684 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491687 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491691 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491694 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491698 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491702 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491705 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491709 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
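The flags.go:64 "FLAG: --name=value" block above is the kubelet walking its entire flag set at startup and logging each flag with its effective value (visible here because the run uses --v=2). The short snippet below reproduces the same pattern with the standard library flag package; the flags it defines are placeholders, and the real kubelet uses its pflag-based flag set rather than this one.

```go
// Generic illustration of the startup flag dump seen above: walk the flag set
// and print every flag as "FLAG: --name=value". The two flags defined here are
// placeholders; the node IP default is just the value from this log.
package main

import (
	"flag"
	"fmt"
)

func main() {
	flag.Int("v", 2, "log verbosity")
	flag.String("node-ip", "192.168.126.11", "IP address of the node")
	flag.Parse()

	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```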
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491713 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491717 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491721 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491726 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491730 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491734 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491738 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491742 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491745 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491749 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491752 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491756 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491759 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491763 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491767 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491772 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491776 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491779 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491784 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491787 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491791 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491794 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491799 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491803 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491807 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491811 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491814 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491818 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491821 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491825 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491828 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491832 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491835 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491840 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491844 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.491848 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.491854 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.502634 4860 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.502689 4860 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502822 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502847 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502862 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502874 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502884 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502896 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502906 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502916 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502927 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502936 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502946 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502956 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502964 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502972 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502980 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.502991 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
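The server.go:493 "Golang settings" entry in the block above records the Go runtime environment the kubelet was started with; all three variables are empty here, so the runtime defaults apply. A trivial way to inspect the same values from inside any Go process, shown purely as an illustration:

```go
// Print the same Go runtime knobs the kubelet reports at startup
// (empty values mean the runtime defaults are in effect).
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	fmt.Printf("GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
		os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
	// Effective parallelism actually in use by this process right now.
	fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```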
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503000 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503009 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503054 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503063 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503072 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503084 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503094 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503140 4860 feature_gate.go:330] unrecognized feature gate: Example Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503148 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503157 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503165 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503173 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503183 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503195 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503205 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503215 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503223 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503425 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503433 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503440 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503448 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503455 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503463 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503471 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503478 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503486 4860 feature_gate.go:330] unrecognized feature 
gate: VSphereControlPlaneMachineSet Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503494 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503502 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503509 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503517 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503528 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503536 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503544 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503552 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503560 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503567 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503575 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503583 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503592 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503602 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503611 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503623 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503632 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503643 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503652 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503660 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503668 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503675 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503683 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503691 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503699 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 08:14:53 crc kubenswrapper[4860]: 
W0123 08:14:53.503707 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503714 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503722 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503729 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.503743 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503967 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503979 4860 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503988 4860 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.503998 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504008 4860 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504045 4860 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504056 4860 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504067 4860 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504086 4860 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504102 4860 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504112 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504122 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504131 4860 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504140 4860 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504148 4860 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504157 4860 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504168 4860 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504181 4860 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504191 4860 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504202 4860 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504211 4860 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504364 4860 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504388 4860 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504398 4860 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504408 4860 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504417 4860 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504427 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504437 4860 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504447 4860 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504457 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504467 4860 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504480 4860 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504491 4860 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504502 4860 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504515 4860 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504526 4860 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504537 4860 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504547 4860 feature_gate.go:330] unrecognized feature gate: Example Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504569 4860 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504584 4860 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504596 4860 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504605 4860 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504614 4860 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504624 4860 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504632 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504640 4860 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504649 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504657 4860 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504665 4860 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504673 4860 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504681 4860 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504689 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504698 4860 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504706 4860 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504713 4860 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504724 4860 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504735 4860 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504747 4860 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504757 4860 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504765 4860 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504773 4860 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504781 4860 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504790 4860 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504799 4860 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504819 4860 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504833 4860 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504844 4860 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504854 4860 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504865 4860 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504875 4860 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.504885 4860 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.504902 4860 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.505539 4860 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.512634 4860 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.512803 4860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
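certificate_store.go has just loaded the client pair from /var/lib/kubelet/pki/kubelet-client-current.pem; the entries that follow compute its expiration and rotation deadline and then attempt to POST a new CertificateSigningRequest, which fails at this point because the API server at api-int.crc.testing:6443 is not reachable yet. The sketch below reads a PEM bundle like that file with the Go standard library and prints each certificate's NotAfter, the value behind the "Certificate expiration is ..." line; it is a debugging aid, not kubelet code.

```go
// Sketch: read a kubelet client certificate bundle and print its expiry,
// i.e. the value behind the "Certificate expiration is ..." log line.
// The path matches this log; run on the node (as root) to reproduce the value.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	// The file holds the certificate and the private key; scan all PEM blocks
	// and report every CERTIFICATE block found.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			continue
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}
```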
Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.513849 4860 server.go:997] "Starting client certificate rotation" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.513903 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.514536 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-06 06:21:50.915176294 +0000 UTC Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.514620 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.523444 4860 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.524434 4860 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.526030 4860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.534979 4860 log.go:25] "Validated CRI v1 runtime API" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.551481 4860 log.go:25] "Validated CRI v1 image API" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.553695 4860 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.556184 4860 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-08-10-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.556228 4860 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.584380 4860 manager.go:217] Machine: {Timestamp:2026-01-23 08:14:53.580901261 +0000 UTC m=+0.208951536 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a3ff46c5-8531-497b-aead-a749b27af7c5 BootID:b04b77da-1d86-4a43-bc9b-6761c86be5d2 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 
Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:65:7d:99 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:65:7d:99 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c5:15:14 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:20:5a:b5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d7:8d:90 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9e:30:e8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:56:4a:26:94:9e:49 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:49:cf:ed:1a:32 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.584705 4860 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.585009 4860 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.585644 4860 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.585851 4860 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.585886 4860 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.586253 4860 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.586267 4860 
container_manager_linux.go:303] "Creating device plugin manager" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.586473 4860 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.586518 4860 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.586824 4860 state_mem.go:36] "Initialized new in-memory state store" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.586921 4860 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.587622 4860 kubelet.go:418] "Attempting to sync node with API server" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.587645 4860 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.587670 4860 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.587684 4860 kubelet.go:324] "Adding apiserver pod source" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.587697 4860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.589628 4860 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.590195 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.590256 4860 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
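At this point every request the kubelet sends to https://api-int.crc.testing:6443 (the CSR POST for client-certificate rotation above, and the Node/Service informer list calls here and below) fails with dial tcp 38.129.56.70:6443: connect: connection refused: the kubelet has come up before the API server on that endpoint is listening, and client-go keeps retrying. A stdlib-only sketch of the same reachability check (the timeout value is an arbitrary assumption):

```python
# Sketch: reproduce the kubelet's symptom (dial tcp ...:6443: connection refused)
# with a plain TCP probe. Host and port are taken from the log above;
# the 3-second timeout is an arbitrary choice.
import socket

def probe(host="api-int.crc.testing", port=6443, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"
    except OSError as exc:  # DNS failure, timeout, no route, etc.
        return f"unreachable: {exc}"

if __name__ == "__main__":
    print(f"api-int.crc.testing:6443 -> {probe()}")
```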
Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.590297 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.590660 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.590783 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.591792 4860 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592614 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592669 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592692 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592711 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592741 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592760 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592778 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592808 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592825 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592844 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592864 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.592879 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.593563 4860 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.594276 4860 server.go:1280] "Started kubelet" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.594692 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.70:6443: 
connect: connection refused Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.594748 4860 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.594885 4860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.595516 4860 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 08:14:53 crc systemd[1]: Started Kubernetes Kubelet. Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.597710 4860 server.go:460] "Adding debug handlers to kubelet server" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.597258 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.70:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d4e17fee6de6a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:14:53.594230378 +0000 UTC m=+0.222280603,LastTimestamp:2026-01-23 08:14:53.594230378 +0000 UTC m=+0.222280603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.599550 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.599596 4860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.599656 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:54:02.24292906 +0000 UTC Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.599898 4860 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.599919 4860 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.600032 4860 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.600135 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.600711 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="200ms" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.600710 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.601078 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.600922 4860 factory.go:55] Registering systemd factory Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.601254 4860 factory.go:221] Registration of the systemd container factory successfully Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.603187 4860 factory.go:153] Registering CRI-O factory Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.603235 4860 factory.go:221] Registration of the crio container factory successfully Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.603367 4860 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.603399 4860 factory.go:103] Registering Raw factory Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.603424 4860 manager.go:1196] Started watching for new ooms in manager Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.608980 4860 manager.go:319] Starting recovery of all containers Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613074 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613144 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613161 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613180 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613196 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613212 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613230 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" 
seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613247 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613265 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613284 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613300 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613320 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613337 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613367 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613382 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613423 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613439 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613455 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc 
kubenswrapper[4860]: I0123 08:14:53.613470 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613551 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613569 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613588 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613608 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613626 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613642 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613661 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613687 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613745 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613767 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613784 4860 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613802 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613820 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613845 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613881 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613900 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613918 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613935 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613955 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613972 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.613989 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614005 4860 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614048 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614070 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614118 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614136 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614154 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614171 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614187 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614204 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614219 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614244 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614264 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614289 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614306 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614324 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614341 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614359 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614377 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614420 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614437 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614454 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614472 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614489 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614507 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614523 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614543 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614562 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614584 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614601 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614618 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614635 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614653 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614673 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614699 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614717 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614735 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614753 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614772 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614790 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614811 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614829 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614847 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614868 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614887 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614903 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614919 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614949 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614966 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.614985 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615002 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615042 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615061 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615078 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615095 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615113 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615133 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615152 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615170 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615189 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615211 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615230 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615248 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615266 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615294 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615358 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615387 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615407 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615427 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615448 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615466 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615487 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615510 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615532 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615549 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615566 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615592 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615609 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615626 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615644 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615663 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615679 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615730 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615748 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615767 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615785 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615802 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615819 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615836 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615855 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615872 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615891 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615911 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.615981 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616007 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616078 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616111 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616131 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616149 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616167 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616218 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616237 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616252 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616271 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616290 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616308 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616328 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616348 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616368 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616385 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616404 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616425 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616442 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616459 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.616477 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617117 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617139 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617157 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617175 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617196 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617222 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617241 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617260 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617278 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617300 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617318 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617338 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617357 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617375 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617395 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617490 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617511 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617532 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617551 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617570 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617587 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617608 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617626 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617692 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617715 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.617737 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619464 4860 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619555 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619595 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619627 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619654 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619680 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619701 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619727 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619752 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619774 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619797 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619818 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619842 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619887 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619924 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619965 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.619998 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620078 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620110 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620144 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620177 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620205 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620231 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620258 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620284 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620312 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620340 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620370 4860 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620397 4860 reconstruct.go:97] "Volume reconstruction finished" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.620416 4860 reconciler.go:26] "Reconciler: start to sync state" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.630677 4860 manager.go:324] Recovery completed Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.643753 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.645746 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.645798 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.645812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.647657 4860 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.647679 4860 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.647764 4860 state_mem.go:36] "Initialized new in-memory state store" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.654111 4860 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.656338 4860 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.656429 4860 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.656463 4860 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.656523 4860 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.701125 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.757264 4860 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 08:14:53 crc kubenswrapper[4860]: W0123 08:14:53.763717 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.763855 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.801394 4860 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.801934 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="400ms" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.811302 4860 policy_none.go:49] "None policy: Start" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.814981 4860 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.815053 4860 state_mem.go:35] "Initializing new in-memory state store" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.872336 4860 manager.go:334] "Starting Device Plugin manager" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.872458 4860 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.872473 4860 server.go:79] "Starting device plugin registration server" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.872932 4860 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.872950 4860 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.873356 4860 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.873536 4860 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 
08:14:53.873564 4860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.881477 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.957959 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.958189 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.959743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.959781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.959793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.959934 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.960229 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.960294 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961327 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961364 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961384 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961470 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961844 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.961878 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.962197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.962219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.962228 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.962369 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.962482 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.962508 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963416 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963737 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963750 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.963911 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964069 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964098 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964782 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964792 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964849 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.964874 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.965087 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.965117 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.965692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.965714 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.965721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.973356 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.974610 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.974666 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.974682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:53 crc kubenswrapper[4860]: I0123 08:14:53.974721 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 08:14:53 crc kubenswrapper[4860]: E0123 08:14:53.975751 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.70:6443: connect: connection refused" node="crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024843 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") 
" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024882 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024906 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024925 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024945 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024966 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.024987 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025000 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025013 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025131 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 
08:14:54.025190 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025235 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025300 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.025334 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.126932 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127005 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127079 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127114 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127124 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127147 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127181 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127220 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127226 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127233 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127146 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127346 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127262 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127280 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127260 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127282 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127489 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127609 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127693 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127730 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127739 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127779 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127803 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127817 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127821 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127848 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127892 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.127859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.176869 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.178379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.178429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.178445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.178477 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.179092 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.70:6443: connect: connection refused" node="crc" Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.196813 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.70:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d4e17fee6de6a default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:14:53.594230378 +0000 UTC m=+0.222280603,LastTimestamp:2026-01-23 08:14:53.594230378 +0000 UTC m=+0.222280603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.203717 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="800ms" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.284784 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.301741 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.313597 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-171c8fd6e4e1dc6324c4345d56f48595db27719c802888fe1cf73911cd025f3d WatchSource:0}: Error finding container 171c8fd6e4e1dc6324c4345d56f48595db27719c802888fe1cf73911cd025f3d: Status 404 returned error can't find the container with id 171c8fd6e4e1dc6324c4345d56f48595db27719c802888fe1cf73911cd025f3d Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.322012 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.338813 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8b51b3b08516f48f57d474e0311514cf02177df535ddd810b8e578719806adc7 WatchSource:0}: Error finding container 8b51b3b08516f48f57d474e0311514cf02177df535ddd810b8e578719806adc7: Status 404 returned error can't find the container with id 8b51b3b08516f48f57d474e0311514cf02177df535ddd810b8e578719806adc7 Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.343075 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.348740 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.357635 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-60435b9b6139f345ce29c7c27148fe1f83710fec62132846603f73c2cea267a2 WatchSource:0}: Error finding container 60435b9b6139f345ce29c7c27148fe1f83710fec62132846603f73c2cea267a2: Status 404 returned error can't find the container with id 60435b9b6139f345ce29c7c27148fe1f83710fec62132846603f73c2cea267a2 Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.365228 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-24e2f862e23cc8de19236c16e699da905e8113cb8329f8859eba5ce57c2471d1 WatchSource:0}: Error finding container 24e2f862e23cc8de19236c16e699da905e8113cb8329f8859eba5ce57c2471d1: Status 404 returned error can't find the container with id 24e2f862e23cc8de19236c16e699da905e8113cb8329f8859eba5ce57c2471d1 Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.415984 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.416114 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.579685 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.581582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.581645 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.581662 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.581701 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.582419 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.70:6443: connect: connection refused" node="crc" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.595683 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.600804 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:24:34.115647934 +0000 UTC Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.607579 4860 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.607682 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.797900 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.798080 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:54 crc kubenswrapper[4860]: W0123 08:14:54.868648 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:54 crc kubenswrapper[4860]: E0123 08:14:54.868759 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.913095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b51b3b08516f48f57d474e0311514cf02177df535ddd810b8e578719806adc7"} Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.914894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"00eac2a32f0c79340004f910766260432e0ae0db4d5a64120363ac4aa9609d67"} Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.916738 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"171c8fd6e4e1dc6324c4345d56f48595db27719c802888fe1cf73911cd025f3d"} Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.918147 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24e2f862e23cc8de19236c16e699da905e8113cb8329f8859eba5ce57c2471d1"} Jan 23 08:14:54 crc kubenswrapper[4860]: I0123 08:14:54.920385 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"60435b9b6139f345ce29c7c27148fe1f83710fec62132846603f73c2cea267a2"} Jan 23 08:14:55 crc kubenswrapper[4860]: E0123 08:14:55.005772 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="1.6s" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.383492 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.385331 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.385395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.385412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.385451 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 08:14:55 crc kubenswrapper[4860]: E0123 08:14:55.386077 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.70:6443: connect: connection refused" node="crc" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.595562 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.600919 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:10:15.53790942 +0000 UTC Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.687842 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 08:14:55 crc kubenswrapper[4860]: E0123 08:14:55.688986 4860 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.924651 4860 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1d5cd6c1ee3ad7e9d15c6ff7a300bf6107270f904d70061b4088f2fff062e002" exitCode=0 Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.924757 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1d5cd6c1ee3ad7e9d15c6ff7a300bf6107270f904d70061b4088f2fff062e002"} Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.924781 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:55 crc 
kubenswrapper[4860]: I0123 08:14:55.926606 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.926648 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.926669 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.929648 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107" exitCode=0 Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.929714 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.929767 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107"} Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.930536 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.930582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.930596 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.932234 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.933532 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.933572 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.933588 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.934870 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1" exitCode=0 Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.934952 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1"} Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.935005 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.936254 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.936293 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.936305 4860 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.937255 4860 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="af1d44a9be015c6332027559e1173df3149cbff28a779bc7715f908a580c49da" exitCode=0 Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.937307 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.937307 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"af1d44a9be015c6332027559e1173df3149cbff28a779bc7715f908a580c49da"} Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.938175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.938212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.938234 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.939687 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69242e228dc2df050a6c0235bccbcdfcf6f2666775efb9b8604de2cfb5561962"} Jan 23 08:14:55 crc kubenswrapper[4860]: I0123 08:14:55.939758 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6bab43a399d0257c60af69194f91cc43ac9e523076d37022a4a9342963ef171e"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.595580 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.601520 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:28:01.462117184 +0000 UTC Jan 23 08:14:56 crc kubenswrapper[4860]: E0123 08:14:56.607232 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="3.2s" Jan 23 08:14:56 crc kubenswrapper[4860]: W0123 08:14:56.622829 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.70:6443: connect: connection refused Jan 23 08:14:56 crc kubenswrapper[4860]: E0123 08:14:56.622891 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.129.56.70:6443: connect: connection refused" logger="UnhandledError" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.945062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.945112 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.945123 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.945135 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.946232 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735" exitCode=0 Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.946273 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.946336 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.948122 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.948149 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.948158 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.948736 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.948735 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c825017ce6ca9a97b5276a601b72847d67850c788dc6e2581363537037174b15"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.949436 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.949477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.949487 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.951095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9da4391d5a9c90025c0a338dc57dadb9da39301412f08a76f61ddb127a9fd932"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.951116 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2cd768905b7c37e39f62a3645b9e3a5aa51890716c528d575c82d84aa8247dfc"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.951125 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.951671 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.951694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.951704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.952964 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"edbc3e5caba6a596b78184c16cf8538bf77fb67b7ae4a0ec1c3433749860116b"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.952989 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2333dd4bfa4617d6a344b55dede8bd4afbe7cdd16becad25912fdf3741427fbb"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.953002 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c757d346a444380b0e22ec42192848a5a69800bc6d99573d01bad48a8ae21b45"} Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.953063 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.953734 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.953764 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.953774 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.986888 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.988220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.988261 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 
08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.988273 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:56 crc kubenswrapper[4860]: I0123 08:14:56.988297 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.601650 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:34:18.570383603 +0000 UTC Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.957160 4860 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570" exitCode=0 Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.957312 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.957354 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570"} Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.958096 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.958123 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.958133 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.960550 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2"} Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.960597 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.960632 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.960734 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.960753 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.960807 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.961733 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.961782 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.961795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962162 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962199 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962290 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962312 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962404 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962414 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:57 crc kubenswrapper[4860]: I0123 08:14:57.962421 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.019461 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.188680 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.602415 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:39:22.963969921 +0000 UTC Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.604642 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.967743 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.967792 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.967831 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.967929 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f"} Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.967831 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.968178 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e"} Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.968310 4860 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558"} Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.968331 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b"} Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.968891 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.968922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.968939 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.969549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.969594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.969611 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.969643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.969767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:58 crc kubenswrapper[4860]: I0123 08:14:58.969781 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.603094 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:53:15.161814629 +0000 UTC Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.731335 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.811074 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.974616 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a36cef30c0d2209eab0365ff3d017c4ca1892e81ca86cd3d6c62e2de2bacf8be"} Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.974683 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.974697 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.974736 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.975857 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.975895 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.975907 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.975861 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.976080 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:14:59 crc kubenswrapper[4860]: I0123 08:14:59.976101 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.604204 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:54:24.439499615 +0000 UTC Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.977657 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.977749 4860 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.977785 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.977891 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.978075 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979548 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979473 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979633 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979571 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979659 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:00 crc kubenswrapper[4860]: I0123 08:15:00.979691 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:00 crc 
kubenswrapper[4860]: I0123 08:15:00.982330 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.604472 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:30:01.469240231 +0000 UTC Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.737285 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.980165 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.980209 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.981493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.981523 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.981531 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.981539 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.981574 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:01 crc kubenswrapper[4860]: I0123 08:15:01.981590 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:02 crc kubenswrapper[4860]: I0123 08:15:02.605549 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:37:48.874563423 +0000 UTC Jan 23 08:15:03 crc kubenswrapper[4860]: I0123 08:15:03.532163 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 08:15:03 crc kubenswrapper[4860]: I0123 08:15:03.532420 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:03 crc kubenswrapper[4860]: I0123 08:15:03.534342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:03 crc kubenswrapper[4860]: I0123 08:15:03.534420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:03 crc kubenswrapper[4860]: I0123 08:15:03.534450 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:03 crc kubenswrapper[4860]: I0123 08:15:03.606137 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:10:37.976740992 +0000 UTC Jan 23 08:15:03 crc kubenswrapper[4860]: E0123 08:15:03.881577 4860 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:15:04 crc kubenswrapper[4860]: 
I0123 08:15:04.606653 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:48:56.1994295 +0000 UTC Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.708530 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.708805 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.710828 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.710883 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.710904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.752720 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.752992 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.755323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.755408 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:04 crc kubenswrapper[4860]: I0123 08:15:04.755437 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.607318 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:38:58.793976975 +0000 UTC Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.624872 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.625180 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.627782 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.627897 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.627956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.631726 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.992328 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.993601 4860 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.993664 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:05 crc kubenswrapper[4860]: I0123 08:15:05.993682 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:06 crc kubenswrapper[4860]: I0123 08:15:06.608354 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 05:07:00.549108852 +0000 UTC Jan 23 08:15:06 crc kubenswrapper[4860]: W0123 08:15:06.888290 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 08:15:06 crc kubenswrapper[4860]: I0123 08:15:06.888439 4860 trace.go:236] Trace[858338315]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:14:56.886) (total time: 10001ms): Jan 23 08:15:06 crc kubenswrapper[4860]: Trace[858338315]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:15:06.888) Jan 23 08:15:06 crc kubenswrapper[4860]: Trace[858338315]: [10.001484023s] [10.001484023s] END Jan 23 08:15:06 crc kubenswrapper[4860]: E0123 08:15:06.888471 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 08:15:06 crc kubenswrapper[4860]: E0123 08:15:06.989450 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 23 08:15:07 crc kubenswrapper[4860]: W0123 08:15:07.073272 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 08:15:07 crc kubenswrapper[4860]: I0123 08:15:07.073380 4860 trace.go:236] Trace[1756826620]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:14:57.072) (total time: 10001ms): Jan 23 08:15:07 crc kubenswrapper[4860]: Trace[1756826620]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:15:07.073) Jan 23 08:15:07 crc kubenswrapper[4860]: Trace[1756826620]: [10.001277046s] [10.001277046s] END Jan 23 08:15:07 crc kubenswrapper[4860]: E0123 08:15:07.073406 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 08:15:07 crc kubenswrapper[4860]: W0123 08:15:07.402038 4860 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 08:15:07 crc kubenswrapper[4860]: I0123 08:15:07.402126 4860 trace.go:236] Trace[403681080]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:14:57.400) (total time: 10001ms): Jan 23 08:15:07 crc kubenswrapper[4860]: Trace[403681080]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:15:07.402) Jan 23 08:15:07 crc kubenswrapper[4860]: Trace[403681080]: [10.001784999s] [10.001784999s] END Jan 23 08:15:07 crc kubenswrapper[4860]: E0123 08:15:07.402145 4860 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 08:15:07 crc kubenswrapper[4860]: I0123 08:15:07.596873 4860 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 08:15:07 crc kubenswrapper[4860]: I0123 08:15:07.609282 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:04:34.722820705 +0000 UTC Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.032903 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.032967 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.037350 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.037423 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.610128 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:57:34.579313452 +0000 UTC Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.625767 4860 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 08:15:08 crc kubenswrapper[4860]: I0123 08:15:08.626096 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.611077 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:16:21.912054957 +0000 UTC Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.817467 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.817884 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.818847 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.818905 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.818933 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:09 crc kubenswrapper[4860]: I0123 08:15:09.822839 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.004109 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.005643 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.005802 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.005990 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.190133 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.191392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.191420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.191430 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.191453 4860 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Jan 23 08:15:10 crc kubenswrapper[4860]: E0123 08:15:10.195248 4860 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 08:15:10 crc kubenswrapper[4860]: I0123 08:15:10.611916 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:13:41.145106766 +0000 UTC Jan 23 08:15:11 crc kubenswrapper[4860]: I0123 08:15:11.613066 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:10:38.370966921 +0000 UTC Jan 23 08:15:12 crc kubenswrapper[4860]: I0123 08:15:12.208941 4860 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:12 crc kubenswrapper[4860]: I0123 08:15:12.613583 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:22:05.959003302 +0000 UTC Jan 23 08:15:12 crc kubenswrapper[4860]: I0123 08:15:12.643476 4860 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.032892 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.034911 4860 trace.go:236] Trace[1598210071]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:15:01.841) (total time: 11193ms): Jan 23 08:15:13 crc kubenswrapper[4860]: Trace[1598210071]: ---"Objects listed" error: 11193ms (08:15:13.034) Jan 23 08:15:13 crc kubenswrapper[4860]: Trace[1598210071]: [11.193640656s] [11.193640656s] END Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.034948 4860 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.036169 4860 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.041349 4860 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.067661 4860 csr.go:261] certificate signing request csr-sfpl8 is approved, waiting to be issued Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.075762 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54776->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.075841 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54776->192.168.126.11:17697: read: connection reset by peer" Jan 23 08:15:13 crc 
kubenswrapper[4860]: I0123 08:15:13.076279 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.076351 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.076718 4860 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.076775 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.078671 4860 csr.go:257] certificate signing request csr-sfpl8 is issued Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.182653 4860 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.514470 4860 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 08:15:13 crc kubenswrapper[4860]: W0123 08:15:13.514710 4860 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 08:15:13 crc kubenswrapper[4860]: W0123 08:15:13.514738 4860 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 08:15:13 crc kubenswrapper[4860]: W0123 08:15:13.514714 4860 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.514727 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.129.56.70:51916->38.129.56.70:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d4e182c8cb901 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:14:54.360074497 +0000 UTC m=+0.988124682,LastTimestamp:2026-01-23 08:14:54.360074497 +0000 UTC m=+0.988124682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.568322 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.600370 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.600580 4860 apiserver.go:52] "Watching apiserver" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.605280 4860 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.605588 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-whd8d"] Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.605946 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.605990 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.606008 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.606104 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.606159 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.606294 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.606638 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.606719 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.606772 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.606873 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.609199 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.610527 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.610811 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.610928 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.610962 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.611274 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.612738 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.612793 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.614220 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.614279 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.614387 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.614466 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:28:12.790610295 +0000 UTC Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.628829 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.645941 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.662550 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.672418 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.682392 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.696277 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.703581 4860 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.715420 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.727420 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740612 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740669 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740694 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740719 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740749 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740772 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740796 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 
08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740818 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740840 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740863 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740886 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740910 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740932 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740952 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740975 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.740995 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741033 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc 
kubenswrapper[4860]: I0123 08:15:13.741060 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741101 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741125 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741148 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741170 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741191 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741170 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741221 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741274 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741338 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741362 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741409 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741421 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741433 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741479 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741505 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741571 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741596 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741614 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741644 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741670 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741686 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741736 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741866 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741942 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741981 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742006 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742046 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742084 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742193 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742242 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742270 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742425 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742456 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742477 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742487 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742519 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742622 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742654 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742668 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742824 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.741695 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742857 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742890 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742932 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742957 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.742979 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743000 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743043 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743090 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743111 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743132 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743161 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743184 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743224 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743248 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743273 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743315 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743338 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743128 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743360 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743330 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743385 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743408 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743433 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743458 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743483 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743509 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743531 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743532 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743555 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743560 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743582 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743615 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743627 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743655 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743776 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743808 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743831 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743834 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743858 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743883 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743906 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743932 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743954 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743974 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.743994 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744030 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744054 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744077 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744099 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744121 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744141 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744161 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744184 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744206 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744230 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744264 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744287 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744295 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744310 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744338 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744362 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744404 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744427 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744452 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744464 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744476 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744501 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744546 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744552 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744588 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744613 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744618 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744640 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744697 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744736 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744766 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744776 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744792 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744822 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744846 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744869 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746249 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746371 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746473 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746544 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746620 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746694 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746760 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746833 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746906 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746975 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747096 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747168 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747463 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747541 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747607 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747668 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747819 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747888 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747960 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748127 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748198 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748267 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748335 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748402 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748469 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748541 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748606 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748804 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748875 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748946 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749052 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749136 
4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749214 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749297 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749379 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749598 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749639 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749922 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749950 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749972 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749993 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750065 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750096 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750120 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750140 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750161 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750183 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750217 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750253 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750295 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750325 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750382 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750409 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" 
(UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750431 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750449 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750470 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750493 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750512 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750540 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750569 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750592 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.744961 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.745565 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746106 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746112 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746184 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.745977 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746254 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746644 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746704 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746874 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746936 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.746970 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747039 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747622 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.747842 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748052 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748217 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748235 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748319 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748394 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748331 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748628 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748723 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748756 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748795 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.748848 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749093 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749119 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749762 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749797 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.749880 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752115 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752255 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752238 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752529 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752676 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752728 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752857 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.752938 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753041 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753120 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753111 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753328 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753436 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753910 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.753974 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.754731 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.754990 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.755039 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.755157 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.755348 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.755440 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.755380 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.755758 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.756095 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.757086 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.757226 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.757533 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.758271 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.758500 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.758580 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.758954 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759083 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759379 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759643 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759672 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759643 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759714 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.759933 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760047 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760113 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760227 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760243 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760449 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760609 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.760948 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761214 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761308 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.750611 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761379 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761590 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761582 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761620 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761735 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761895 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761922 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761940 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761961 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.761986 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762009 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762080 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762101 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762125 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762206 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762230 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762253 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762270 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762274 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762324 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762342 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762347 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762489 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762567 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762529 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762656 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.762681 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:14.262657542 +0000 UTC m=+20.890707917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762716 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.762734 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762753 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.762816 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:14.262791915 +0000 UTC m=+20.890842100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763428 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763459 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763477 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763510 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763537 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763595 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763620 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763649 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763659 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763683 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c7a15168-b99c-4784-8874-d94f2d27b404-hosts-file\") pod \"node-resolver-whd8d\" (UID: \"c7a15168-b99c-4784-8874-d94f2d27b404\") " pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763717 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763750 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763775 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnpv\" (UniqueName: \"kubernetes.io/projected/c7a15168-b99c-4784-8874-d94f2d27b404-kube-api-access-ppnpv\") pod \"node-resolver-whd8d\" (UID: \"c7a15168-b99c-4784-8874-d94f2d27b404\") " pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763947 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763972 4860 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763977 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763992 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764007 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764043 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764057 4860 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764067 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764073 4860 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764104 4860 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764119 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764134 4860 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764149 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764162 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764289 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764306 4860 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764319 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764332 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764343 4860 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764356 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764367 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764379 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764393 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764405 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764420 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764434 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764449 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764463 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764476 4860 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764490 4860 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764503 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764519 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764532 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764545 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764559 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764572 4860 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764585 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764598 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764624 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764638 4860 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764650 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764665 4860 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764677 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764690 4860 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764704 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764716 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764730 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764745 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764760 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764773 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764786 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764797 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764809 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764820 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764832 4860 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764843 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764854 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764865 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764877 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764888 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764899 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764912 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764923 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764935 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764946 4860 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764982 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765010 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.764988 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765067 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765086 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.765085 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765139 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.765167 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:14.265146684 +0000 UTC m=+20.893196869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765181 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765194 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765205 4860 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765215 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765225 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765235 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762989 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.762997 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.763268 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765246 4860 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765757 4860 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765791 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765823 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765843 4860 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765857 4860 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765874 4860 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765899 4860 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765929 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765945 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765960 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765974 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765988 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766002 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.765986 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766040 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766347 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766370 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766386 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766402 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766414 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766429 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766442 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766458 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766471 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc 
kubenswrapper[4860]: I0123 08:15:13.766489 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766502 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.766821 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767044 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767073 4860 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767088 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767103 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767116 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767130 4860 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767195 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767213 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767283 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767297 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node 
\"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767311 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767325 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767338 4860 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767351 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767365 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767384 4860 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767399 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767413 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767472 4860 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767492 4860 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767608 4860 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767781 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767950 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.767998 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.768027 4860 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.768107 4860 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.768122 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.768135 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.768149 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.768163 4860 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.770694 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.770864 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.771477 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.775535 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.775613 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.775962 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.776060 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.776585 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.778577 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.778635 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.778848 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.779199 4860 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.779279 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.779298 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.779216 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:14.279187353 +0000 UTC m=+20.907237748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:13 crc kubenswrapper[4860]: E0123 08:15:13.779414 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:14.279382418 +0000 UTC m=+20.907432603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.780771 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.780793 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.780878 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.780882 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.781250 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.781575 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.781775 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.781970 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.781981 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.782245 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.782306 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.782440 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.783165 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e81ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6
d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.783718 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.785482 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.785689 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.785824 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.786351 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.787336 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.791497 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.791609 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.785058 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.792718 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.792821 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.793058 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.793256 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.793758 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794030 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794146 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794175 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794330 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794480 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794612 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794616 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.794929 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.795523 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.795674 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.795757 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.797417 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.797835 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.798826 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.798888 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.799173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.799187 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.801745 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.801956 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802107 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802394 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802407 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802511 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802635 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802722 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802738 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802767 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.802953 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.806231 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.819903 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.822132 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.827720 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.831292 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.838334 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e81ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode
\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.850502 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.860218 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869063 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869115 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c7a15168-b99c-4784-8874-d94f2d27b404-hosts-file\") pod \"node-resolver-whd8d\" (UID: \"c7a15168-b99c-4784-8874-d94f2d27b404\") " pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869151 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869177 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnpv\" (UniqueName: \"kubernetes.io/projected/c7a15168-b99c-4784-8874-d94f2d27b404-kube-api-access-ppnpv\") pod \"node-resolver-whd8d\" (UID: \"c7a15168-b99c-4784-8874-d94f2d27b404\") " pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869238 4860 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869255 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869268 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869280 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 
08:15:13.869294 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869306 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869317 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869328 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869340 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869351 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869363 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869376 4860 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869388 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869400 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869412 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869424 4860 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869437 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869449 4860 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869460 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869474 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869486 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869498 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869510 4860 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869521 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869533 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869544 4860 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869555 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869569 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869580 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869592 4860 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 
08:15:13.869605 4860 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869617 4860 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869630 4860 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869643 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869655 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869668 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869679 4860 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869690 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869702 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869713 4860 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869736 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869748 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869760 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869772 4860 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869784 4860 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869797 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869811 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869822 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869833 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869846 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869857 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869868 4860 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869880 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869891 4860 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869903 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869914 4860 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869925 4860 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869936 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869950 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869962 4860 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869976 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.869988 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870000 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870030 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870044 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870055 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c7a15168-b99c-4784-8874-d94f2d27b404-hosts-file\") pod \"node-resolver-whd8d\" (UID: \"c7a15168-b99c-4784-8874-d94f2d27b404\") " pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.870467 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.878347 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.890001 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnpv\" (UniqueName: \"kubernetes.io/projected/c7a15168-b99c-4784-8874-d94f2d27b404-kube-api-access-ppnpv\") pod \"node-resolver-whd8d\" (UID: \"c7a15168-b99c-4784-8874-d94f2d27b404\") " pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.893584 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.919463 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.922802 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.933222 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 08:15:13 crc kubenswrapper[4860]: W0123 08:15:13.939077 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-60ae05e07c6ffaa65bb5cd45cefd686d2fab05bda7232680cbff3434320f18f3 WatchSource:0}: Error finding container 60ae05e07c6ffaa65bb5cd45cefd686d2fab05bda7232680cbff3434320f18f3: Status 404 returned error can't find the container with id 60ae05e07c6ffaa65bb5cd45cefd686d2fab05bda7232680cbff3434320f18f3 Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.940196 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 08:15:13 crc kubenswrapper[4860]: I0123 08:15:13.944463 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-whd8d" Jan 23 08:15:13 crc kubenswrapper[4860]: W0123 08:15:13.956606 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c63701f82f5268536db4359fb864db5d56ecb7b4c29142868ee7666629caa8f2 WatchSource:0}: Error finding container c63701f82f5268536db4359fb864db5d56ecb7b4c29142868ee7666629caa8f2: Status 404 returned error can't find the container with id c63701f82f5268536db4359fb864db5d56ecb7b4c29142868ee7666629caa8f2 Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.016833 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.019450 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2" exitCode=255 Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.019518 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2"} Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.021830 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-whd8d" event={"ID":"c7a15168-b99c-4784-8874-d94f2d27b404","Type":"ContainerStarted","Data":"5b44ecf5a268843081d0b7c507ca0177d4e42b3a96f9e9461c2b5874d39b572f"} Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.022876 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f67d233824a1dd8e3351827e66f119a1782a2ab9e425c40391546318622b5c5a"} Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.024684 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c63701f82f5268536db4359fb864db5d56ecb7b4c29142868ee7666629caa8f2"} Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.025883 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"60ae05e07c6ffaa65bb5cd45cefd686d2fab05bda7232680cbff3434320f18f3"} Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.028540 4860 scope.go:117] "RemoveContainer" containerID="751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.028871 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.029027 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.038681 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.039041 4860 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.057309 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e81ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.067978 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.080046 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 08:10:13 +0000 UTC, rotation deadline is 2026-12-11 06:20:18.653936624 +0000 UTC Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.080110 4860 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7726h5m4.5738286s for next certificate rotation Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.087057 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.096727 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.105779 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.114575 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.276259 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.276339 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.276359 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.276466 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.276490 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:15.276466375 +0000 UTC m=+21.904516570 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.276544 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:15.276532017 +0000 UTC m=+21.904582202 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.276483 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.276614 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:15.276604698 +0000 UTC m=+21.904654963 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.377808 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.377872 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378058 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378088 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378105 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378058 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378191 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:14 crc kubenswrapper[4860]: 
E0123 08:15:14.378221 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378166 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:15.378146233 +0000 UTC m=+22.006196428 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.378288 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:15.378269216 +0000 UTC m=+22.006319401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.615423 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:58:03.75733184 +0000 UTC Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.656685 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:14 crc kubenswrapper[4860]: E0123 08:15:14.656827 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.802988 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-tk8df"] Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.803430 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.803457 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-b55cn"] Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.804290 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-b55cn" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.806365 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.806508 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.806664 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4k855"] Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.806679 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.806985 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.807267 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.807353 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.807478 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.807484 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.807546 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.807585 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.809243 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.809647 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 08:15:14 crc kubenswrapper[4860]: I0123 08:15:14.816363 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081dccf3-546f-41d3-bd98-ce1b0bbe037e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tk8df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.077635 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.079530 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-k8s-cni-cncf-io\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.079627 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgq2x\" (UniqueName: \"kubernetes.io/projected/c3224b07-df3e-4f30-9d73-cf34290cfecb-kube-api-access-lgq2x\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.079732 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cnibin\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.079803 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3224b07-df3e-4f30-9d73-cf34290cfecb-cni-binary-copy\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.079896 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/081dccf3-546f-41d3-bd98-ce1b0bbe037e-proxy-tls\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.080047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cni-binary-copy\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081058 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmmm\" (UniqueName: \"kubernetes.io/projected/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-kube-api-access-vzmmm\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081164 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-hostroot\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-conf-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081278 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-system-cni-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081334 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-system-cni-dir\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081412 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-cni-multus\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081871 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-netns\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081895 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-os-release\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081918 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/081dccf3-546f-41d3-bd98-ce1b0bbe037e-mcd-auth-proxy-config\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081937 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqhv\" (UniqueName: \"kubernetes.io/projected/081dccf3-546f-41d3-bd98-ce1b0bbe037e-kube-api-access-pnqhv\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081961 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-os-release\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.081986 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-kubelet\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082062 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-cni-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082081 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-cnibin\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082100 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-cni-bin\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082118 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-etc-kubernetes\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082151 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-socket-dir-parent\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082171 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-multus-certs\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082190 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/081dccf3-546f-41d3-bd98-ce1b0bbe037e-rootfs\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.082208 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-daemon-config\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.097301 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.117959 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.141006 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e81ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b
37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.155055 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qv8z"] Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.156086 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.160561 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.160584 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.160763 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.160820 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.160847 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.160926 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.161005 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.165034 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183375 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-kubelet\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183423 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183454 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-ovn\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183475 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-script-lib\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183498 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-cnibin\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183522 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-cni-bin\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-kubelet\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183591 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-etc-kubernetes\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183625 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-cnibin\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183652 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-node-log\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-etc-kubernetes\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183688 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-cni-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183735 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-socket-dir-parent\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-cni-bin\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183758 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-multus-certs\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183767 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-cni-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 
08:15:15.183787 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183802 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-socket-dir-parent\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-multus-certs\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183861 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/081dccf3-546f-41d3-bd98-ce1b0bbe037e-rootfs\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183903 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-daemon-config\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183904 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/081dccf3-546f-41d3-bd98-ce1b0bbe037e-rootfs\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.183949 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgq2x\" (UniqueName: \"kubernetes.io/projected/c3224b07-df3e-4f30-9d73-cf34290cfecb-kube-api-access-lgq2x\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184013 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-systemd-units\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184053 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-bin\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184092 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-k8s-cni-cncf-io\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184123 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cnibin\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184169 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3224b07-df3e-4f30-9d73-cf34290cfecb-cni-binary-copy\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184191 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-netns\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184340 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-log-socket\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184373 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-ovn-kubernetes\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-k8s-cni-cncf-io\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184437 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/081dccf3-546f-41d3-bd98-ce1b0bbe037e-proxy-tls\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184401 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cnibin\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184484 4860 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cni-binary-copy\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184597 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzmmm\" (UniqueName: \"kubernetes.io/projected/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-kube-api-access-vzmmm\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184675 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-daemon-config\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184718 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-slash\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184912 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-netd\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184944 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-hostroot\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.184969 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-conf-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185039 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-system-cni-dir\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185067 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185090 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-kubelet\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185112 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-env-overrides\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185174 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovn-node-metrics-cert\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185202 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-system-cni-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185225 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-cni-multus\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185248 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-etc-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185263 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cni-binary-copy\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185277 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/081dccf3-546f-41d3-bd98-ce1b0bbe037e-mcd-auth-proxy-config\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnqhv\" (UniqueName: \"kubernetes.io/projected/081dccf3-546f-41d3-bd98-ce1b0bbe037e-kube-api-access-pnqhv\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc 
kubenswrapper[4860]: I0123 08:15:15.185327 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-os-release\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185349 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-netns\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185371 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-os-release\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185394 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-systemd\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185449 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-system-cni-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185486 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-var-lib-cni-multus\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185633 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c3224b07-df3e-4f30-9d73-cf34290cfecb-cni-binary-copy\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185680 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-multus-conf-dir\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185705 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-hostroot\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185726 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-system-cni-dir\") pod 
\"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185766 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-host-run-netns\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185776 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185952 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-os-release\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.185981 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-config\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.186045 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c3224b07-df3e-4f30-9d73-cf34290cfecb-os-release\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.186069 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-var-lib-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.186096 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89szh\" (UniqueName: \"kubernetes.io/projected/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-kube-api-access-89szh\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.186127 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.186135 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/081dccf3-546f-41d3-bd98-ce1b0bbe037e-mcd-auth-proxy-config\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.186416 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.195754 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.196281 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/081dccf3-546f-41d3-bd98-ce1b0bbe037e-proxy-tls\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.205879 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnqhv\" (UniqueName: \"kubernetes.io/projected/081dccf3-546f-41d3-bd98-ce1b0bbe037e-kube-api-access-pnqhv\") pod \"machine-config-daemon-tk8df\" (UID: \"081dccf3-546f-41d3-bd98-ce1b0bbe037e\") " pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.213808 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgq2x\" (UniqueName: \"kubernetes.io/projected/c3224b07-df3e-4f30-9d73-cf34290cfecb-kube-api-access-lgq2x\") pod \"multus-b55cn\" (UID: \"c3224b07-df3e-4f30-9d73-cf34290cfecb\") " pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.216814 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.240303 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.265723 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4f3198-2504-4e7f-83bc-c050b3eee2f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T08:15:13Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 08:15:07.539783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 08:15:07.542901 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1688305104/tls.crt::/tmp/serving-cert-1688305104/tls.key\\\\\\\"\\\\nI0123 08:15:13.043233 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:15:13.048857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:15:13.048900 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:15:13.048934 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:15:13.048941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:15:13.056229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:15:13.056258 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:15:13.056275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:15:13.056280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:15:13.056284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 08:15:13.056560 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 08:15:13.063055 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.275080 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.284307 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4f3198-2504-4e7f-83bc-c050b3eee2f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T08:15:13Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 08:15:07.539783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 08:15:07.542901 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1688305104/tls.crt::/tmp/serving-cert-1688305104/tls.key\\\\\\\"\\\\nI0123 08:15:13.043233 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:15:13.048857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:15:13.048900 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:15:13.048934 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:15:13.048941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:15:13.056229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:15:13.056258 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:15:13.056275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:15:13.056280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:15:13.056284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 08:15:13.056560 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 08:15:13.063055 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.286783 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.286951 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.286985 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: 
\"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287044 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-systemd-units\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287068 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-bin\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287230 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-ovn-kubernetes\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287269 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-netns\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287286 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-log-socket\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287321 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-slash\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287338 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-netd\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287356 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-kubelet\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-env-overrides\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 
crc kubenswrapper[4860]: I0123 08:15:15.287379 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-systemd-units\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovn-node-metrics-cert\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287392 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-netns\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287411 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-etc-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287418 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-slash\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287440 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-etc-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287460 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-log-socket\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287468 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-systemd\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287469 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-kubelet\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.287484 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287495 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-netd\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287442 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-systemd\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.287496 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:17.287479029 +0000 UTC m=+23.915529214 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287496 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-ovn-kubernetes\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287474 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-bin\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.287541 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:17.2875329 +0000 UTC m=+23.915583085 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287488 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287559 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-config\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-var-lib-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287647 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89szh\" (UniqueName: \"kubernetes.io/projected/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-kube-api-access-89szh\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287683 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-var-lib-openvswitch\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287666 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287714 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287729 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287750 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-ovn\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287770 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-script-lib\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287793 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-node-log\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.287796 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287842 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-node-log\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.287818 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-ovn\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.287859 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:17.287844937 +0000 UTC m=+23.915895172 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.288156 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-env-overrides\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.288441 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-script-lib\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.288444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-config\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.289591 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzmmm\" (UniqueName: \"kubernetes.io/projected/2e33feb5-d4ac-4ca7-96ce-260dbd32e192-kube-api-access-vzmmm\") pod \"multus-additional-cni-plugins-4k855\" (UID: \"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\") " pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.290871 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovn-node-metrics-cert\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.307123 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e8
1ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.313138 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89szh\" (UniqueName: \"kubernetes.io/projected/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-kube-api-access-89szh\") pod \"ovnkube-node-5qv8z\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.317318 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.325581 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.335826 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.345761 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.357770 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081dccf3-546f-41d3-bd98-ce1b0bbe037e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tk8df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.369805 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4k855\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.378465 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.382568 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.388579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.388646 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388755 4860 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388782 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388792 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388812 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388852 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388866 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388832 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:17.388819408 +0000 UTC m=+24.016869593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.388968 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:17.388941631 +0000 UTC m=+24.016991876 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:15 crc kubenswrapper[4860]: W0123 08:15:15.389844 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081dccf3_546f_41d3_bd98_ce1b0bbe037e.slice/crio-8f289d58a09c38e548576a1f87a8f4ec8f45ae8d17b7cfa27694ef7feca473d1 WatchSource:0}: Error finding container 8f289d58a09c38e548576a1f87a8f4ec8f45ae8d17b7cfa27694ef7feca473d1: Status 404 returned error can't find the container with id 8f289d58a09c38e548576a1f87a8f4ec8f45ae8d17b7cfa27694ef7feca473d1 Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.389962 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b55cn" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.395385 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: W0123 08:15:15.402457 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3224b07_df3e_4f30_9d73_cf34290cfecb.slice/crio-f2d3e1b2a21a448168dfdff1ad4042047e3ff9c8b6788ea5820d2ec0294e0a0c WatchSource:0}: Error finding container f2d3e1b2a21a448168dfdff1ad4042047e3ff9c8b6788ea5820d2ec0294e0a0c: Status 404 returned error can't find the container with id f2d3e1b2a21a448168dfdff1ad4042047e3ff9c8b6788ea5820d2ec0294e0a0c Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.406869 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.407110 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4k855" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.417931 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b55cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3224b07-df3e-4f30-9d73-cf34290cfecb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b55cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.436734 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qv8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.473358 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.584100 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:15:15 crc kubenswrapper[4860]: W0123 08:15:15.590672 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd28d122d_d793_4f09_9f3d_00a5a5b93e6b.slice/crio-9114060d576b68de31b6aa3507ad517bbb40df73e4ee846daf5fd33045b88b64 WatchSource:0}: Error finding container 9114060d576b68de31b6aa3507ad517bbb40df73e4ee846daf5fd33045b88b64: Status 404 returned error can't find the container with id 9114060d576b68de31b6aa3507ad517bbb40df73e4ee846daf5fd33045b88b64 Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.615733 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:39:03.939496603 +0000 UTC Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.628998 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.638285 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.641718 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.649350 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.657555 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.657661 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.657756 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:15 crc kubenswrapper[4860]: E0123 08:15:15.657824 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.662155 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.662711 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.663583 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.664293 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.664950 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.666497 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.667110 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.667668 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.668958 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.669505 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.671108 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.671797 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.672140 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.673275 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.674002 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.674657 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.675231 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.676042 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.676465 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.677923 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.679160 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.679885 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.681446 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.681911 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.683009 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.683488 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.684448 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081dccf3-546f-41d3-bd98-ce1b0bbe037e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tk8df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.684980 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.685794 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.686379 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.687548 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.688276 4860 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.689285 4860 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.689397 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.695150 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.695923 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.697266 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.699160 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.699886 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.702200 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.703152 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.703337 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4k855\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.704362 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.704869 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.705576 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.706883 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.707944 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.708583 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.711045 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.711778 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.713201 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.713825 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.714755 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.715272 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.715790 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.716865 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.717471 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.720485 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b55cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3224b07-df3e-4f30-9d73-cf34290cfecb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b55cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.738829 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qv8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.753672 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.766584 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.782177 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.798867 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.813891 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4f3198-2504-4e7f-83bc-c050b3eee2f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T08:15:13Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 08:15:07.539783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 08:15:07.542901 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1688305104/tls.crt::/tmp/serving-cert-1688305104/tls.key\\\\\\\"\\\\nI0123 08:15:13.043233 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:15:13.048857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:15:13.048900 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:15:13.048934 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:15:13.048941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:15:13.056229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:15:13.056258 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:15:13.056275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:15:13.056280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:15:13.056284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 08:15:13.056560 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 08:15:13.063055 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.840616 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e8
1ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.854964 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.867210 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4f3198-2504-4e7f-83bc-c050b3eee2f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T08:15:13Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 08:15:07.539783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 08:15:07.542901 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1688305104/tls.crt::/tmp/serving-cert-1688305104/tls.key\\\\\\\"\\\\nI0123 08:15:13.043233 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:15:13.048857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:15:13.048900 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:15:13.048934 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:15:13.048941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:15:13.056229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:15:13.056258 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:15:13.056275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:15:13.056280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:15:13.056284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 08:15:13.056560 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 08:15:13.063055 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.897222 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e8
1ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.919837 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.934082 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.947416 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.971215 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:15 crc kubenswrapper[4860]: I0123 08:15:15.992958 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081dccf3-546f-41d3-bd98-ce1b0bbe037e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tk8df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:15Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.010922 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4k855\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.023305 4860 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca1a207-8e22-4b0d-a7d2-c8738c390542\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69242e228dc2df050a6c0235bccbcdfcf6f2666775efb9b8604de2cfb5561962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bab43a399d0257c60af69194f91cc43ac9e523076d37022a4a9342963ef171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd768905b7c37e39f62a3645b9e3a5aa51890716c528d575c82d84aa8247dfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9da4391d5a9c90025c0a338dc57dadb9da39301412f08a76f6
1ddb127a9fd932\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.046065 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.061644 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.078796 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.091434 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.095331 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e2a2b7ac6ee5d1e06d3a489bf96c7a611c8e6774cea1778bf5835b61191edf3e"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.095590 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.098359 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b55cn" event={"ID":"c3224b07-df3e-4f30-9d73-cf34290cfecb","Type":"ContainerStarted","Data":"cd465c1d884cf5575bcb88a353eab28fd9c3f63fad448d2531bb6e16950d961c"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.098398 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b55cn" event={"ID":"c3224b07-df3e-4f30-9d73-cf34290cfecb","Type":"ContainerStarted","Data":"f2d3e1b2a21a448168dfdff1ad4042047e3ff9c8b6788ea5820d2ec0294e0a0c"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.100551 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"c13e06d64fa120c15909f6241a508dd7f9b15e608bc08595169b17ce0ad434bd"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.100714 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.100783 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"8f289d58a09c38e548576a1f87a8f4ec8f45ae8d17b7cfa27694ef7feca473d1"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 
08:15:16.105974 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b55cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3224b07-df3e-4f30-9d73-cf34290cfecb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\
" for pod \"openshift-multus\"/\"multus-b55cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.106806 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7b0f3da1f71eac3919ff32b5fd85ad376c07f73b6adcd2ce7d5517075d997ef1"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.106923 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5cb7e8aef66c44238dda1759cd06ca7b3df59499e6e98d642c71d013706845cc"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.108576 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"62e14be98b69b1718ad74903dc8eb8df31dedc99436c0280a532af24faaf421a"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.110553 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" exitCode=0 Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.110645 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.110695 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"9114060d576b68de31b6aa3507ad517bbb40df73e4ee846daf5fd33045b88b64"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.112333 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-whd8d" event={"ID":"c7a15168-b99c-4784-8874-d94f2d27b404","Type":"ContainerStarted","Data":"67984d8ce047d8b2b450cf67e69fc47bf8133716a7ac1dfcabf5542f5ac33a77"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.113967 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e33feb5-d4ac-4ca7-96ce-260dbd32e192" containerID="6809330a7659f37da9fe0f17bfdbe2f86fb037f44bc4b56bea70420e63013e2c" exitCode=0 Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.114525 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerDied","Data":"6809330a7659f37da9fe0f17bfdbe2f86fb037f44bc4b56bea70420e63013e2c"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.114563 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerStarted","Data":"2ca47342947ab4d3070e57e06f1862313beeb16371425abfff62a42e10247922"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.129711 
4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qv8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.146405 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b55cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3224b07-df3e-4f30-9d73-cf34290cfecb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd465c1d884cf5575bcb88a353eab28fd9c3f63fad448d2531bb6e16950d961c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lgq2x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b55cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.163964 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89szh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5qv8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z 
is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.176908 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca1a207-8e22-4b0d-a7d2-c8738c390542\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69242e228dc2df050a6c0235bccbcdfcf6f2666775efb9b8604de2cfb5561962\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bab43a399d0257c60af69194f91cc43ac9e523076d37022a4a9342963ef171e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cd768905b7c37e39f62a3645b9e3a5aa51890716c528d575c82d84aa8247dfc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9da4391d5a9c90025c0a338dc57dadb9da39301412f08a76f61ddb127a9fd932\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.193251 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62e14be98b69b1718ad74903dc8eb8df31dedc99436c0280a532af24faaf421a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.214904 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.262765 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b0f3da1f71eac3919ff32b5fd85ad376c07f73b6adcd2ce7d5517075d997ef1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cb7e8aef66c44238dda1759cd06ca7b3df59499e6e98d642c71d013706845cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.296451 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67984d8ce047d8b2b450cf67e69fc47bf8133716a7ac1dfcabf5542f5ac33a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.322754 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4f3198-2504-4e7f-83bc-c050b3eee2f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2a2b7ac6ee5d1e06d3a489bf96c7a611c8e6774cea1778bf5835b61191edf3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 08:15:07.539783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 08:15:07.542901 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1688305104/tls.crt::/tmp/serving-cert-1688305104/tls.key\\\\\\\"\\\\nI0123 08:15:13.043233 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:15:13.048857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:15:13.048900 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:15:13.048934 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:15:13.048941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:15:13.056229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:15:13.056258 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:15:13.056275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:15:13.056280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:15:13.056284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 08:15:13.056560 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 08:15:13.063055 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.369257 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e8
1ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.385946 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.401221 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.415819 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.425930 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081dccf3-546f-41d3-bd98-ce1b0bbe037e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c13e06d64fa120c15909f6241a508dd7f9b15e608bc08595169b17ce0ad434bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tk8df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.439763 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6809330a7659f37da9fe0f17bfdbe2f86fb037f44bc4b56bea70420e63013e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6809330a7659f37da9fe0f17bfdbe2f86fb037f44bc4b56bea70420e63013e2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4k855\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc 
kubenswrapper[4860]: I0123 08:15:16.595480 4860 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.597220 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.597262 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.597274 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.597402 4860 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.604990 4860 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.605239 4860 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.606142 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.606182 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.606197 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.606215 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.606228 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.616221 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:52:15.327457463 +0000 UTC Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.634222 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b04b77da-1d86-4a43-bc9b-6761c86be5d2\\\",\\\"systemUUID\\\":\\\"a3ff46c5-8531-497b-aead-a749b27af7c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.639427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.639468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.639478 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.639493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.639502 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.651816 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b04b77da-1d86-4a43-bc9b-6761c86be5d2\\\",\\\"systemUUID\\\":\\\"a3ff46c5-8531-497b-aead-a749b27af7c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.655248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.655280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.655290 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.655309 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.655318 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.656905 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.657030 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.667544 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b04b77da-1d86-4a43-bc9b-6761c86be5d2\\\",\\\"systemUUID\\\":\\\"a3ff46c5-8531-497b-aead-a749b27af7c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.671508 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.671547 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.671556 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.671573 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.671584 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.688084 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b04b77da-1d86-4a43-bc9b-6761c86be5d2\\\",\\\"systemUUID\\\":\\\"a3ff46c5-8531-497b-aead-a749b27af7c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.692471 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.692619 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.692687 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.692807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.692874 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.706696 4860 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b04b77da-1d86-4a43-bc9b-6761c86be5d2\\\",\\\"systemUUID\\\":\\\"a3ff46c5-8531-497b-aead-a749b27af7c5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:16Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:16 crc kubenswrapper[4860]: E0123 08:15:16.707146 4860 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.711010 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.711062 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.711071 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.711084 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.711093 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.813078 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.813201 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.813316 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.813383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.813467 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.915217 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.915247 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.915256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.915270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:16 crc kubenswrapper[4860]: I0123 08:15:16.915282 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:16Z","lastTransitionTime":"2026-01-23T08:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.017385 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.017434 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.017445 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.017464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.017479 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.068449 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xgspg"] Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.068850 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.070850 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.071177 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.072323 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.072362 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.083849 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-whd8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7a15168-b99c-4784-8874-d94f2d27b404\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67984d8ce047d8b2b450cf67e69fc47bf8133716a7ac1dfcabf5542f5ac33a77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppnpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-whd8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.094626 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xgspg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2054c19c-47dc-4e16-b1d2-147e07994dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-76zh2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xgspg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.103609 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2054c19c-47dc-4e16-b1d2-147e07994dc4-serviceca\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.103647 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zh2\" (UniqueName: \"kubernetes.io/projected/2054c19c-47dc-4e16-b1d2-147e07994dc4-kube-api-access-76zh2\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.103665 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2054c19c-47dc-4e16-b1d2-147e07994dc4-host\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.112437 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4f3198-2504-4e7f-83bc-c050b3eee2f0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2a2b7ac6ee5d1e06d3a489bf96c7a611c8e6774cea1778bf5835b61191edf3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 08:15:07.539783 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 08:15:07.542901 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1688305104/tls.crt::/tmp/serving-cert-1688305104/tls.key\\\\\\\"\\\\nI0123 08:15:13.043233 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:15:13.048857 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:15:13.048900 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:15:13.048934 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:15:13.048941 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:15:13.056229 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:15:13.056258 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056265 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:15:13.056271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:15:13.056275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:15:13.056280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:15:13.056284 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 08:15:13.056560 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 08:15:13.063055 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.119705 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.119765 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.119775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.119788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.119797 4860 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.122316 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3248a7f72ee67676d912e2e1d7158e1c206e18e9b836e4f9bbe4275d5add98bc"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.130773 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.131037 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.131109 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.131173 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.131234 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.131330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.132388 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e33feb5-d4ac-4ca7-96ce-260dbd32e192" containerID="3519b3fc66e469a49a5790ba0da34da0e0d453adb991ef9e506edf913fd9797b" exitCode=0 Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.132418 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerDied","Data":"3519b3fc66e469a49a5790ba0da34da0e0d453adb991ef9e506edf913fd9797b"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.137521 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aca8298-9a2d-41c6-97e9-3a94da362bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:14:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d13250b5ccd0a197193c93e5df2bd6f5c72dcab476ec7685bfa8afc9bf6d558\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dbfe55c5d196226c9262083a7e88f1cb455c455494cd644cb22acc18ba0c94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fcf55fb8da43397f914c29d398e4874942d63d675d09254a45c3cfa977b080f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a36cef30c0d2209eab0365ff3d017c4ca1892e8
1ca86cd3d6c62e2de2bacf8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5c54880bc40303e88ebc425cb27262cc4a16be534de991ed9f849a7a54a67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0329fcaa342dd70e718625c0c3991d71213abc073d40697382ce6d981a1b37a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cce2693300a590f22be4c4a75cd7e637234794d3a9eb9ca34cb316447527735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39f009bba16f3faf886a29b31318c44373723ac890d899225b66c32025c570\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:14:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.150827 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.163044 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.176365 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.185819 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081dccf3-546f-41d3-bd98-ce1b0bbe037e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c13e06d64fa120c15909f6241a508dd7f9b15e608bc08595169b17ce0ad434bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnqhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tk8df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.199851 4860 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e33feb5-d4ac-4ca7-96ce-260dbd32e192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6809330a7659f37da9fe0f17bfdbe2f86fb037f44bc4b56bea70420e63013e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6809330a7659f37da9fe0f17bfdbe2f86fb037f44bc4b56bea70420e63013e2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:15:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vzmmm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:15:14Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4k855\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T08:15:17Z is after 2025-08-24T17:21:41Z" Jan 23 08:15:17 crc 
kubenswrapper[4860]: I0123 08:15:17.204436 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2054c19c-47dc-4e16-b1d2-147e07994dc4-serviceca\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.204498 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zh2\" (UniqueName: \"kubernetes.io/projected/2054c19c-47dc-4e16-b1d2-147e07994dc4-kube-api-access-76zh2\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.204543 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2054c19c-47dc-4e16-b1d2-147e07994dc4-host\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.205185 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2054c19c-47dc-4e16-b1d2-147e07994dc4-host\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.206356 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2054c19c-47dc-4e16-b1d2-147e07994dc4-serviceca\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.222151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.222202 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.222216 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.222232 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.222246 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.224880 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zh2\" (UniqueName: \"kubernetes.io/projected/2054c19c-47dc-4e16-b1d2-147e07994dc4-kube-api-access-76zh2\") pod \"node-ca-xgspg\" (UID: \"2054c19c-47dc-4e16-b1d2-147e07994dc4\") " pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.245412 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b55cn" podStartSLOduration=3.245368195 podStartE2EDuration="3.245368195s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:17.22090266 +0000 UTC m=+23.848952845" watchObservedRunningTime="2026-01-23 08:15:17.245368195 +0000 UTC m=+23.873418380" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.278555 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.2785257 podStartE2EDuration="2.2785257s" podCreationTimestamp="2026-01-23 08:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:17.260829475 +0000 UTC m=+23.888879660" watchObservedRunningTime="2026-01-23 08:15:17.2785257 +0000 UTC m=+23.906575885" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.305011 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.305162 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.305192 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:21.30516484 +0000 UTC m=+27.933215025 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.305229 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.305240 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.305281 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:21.305266572 +0000 UTC m=+27.933316747 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.305361 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.305414 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:21.305404266 +0000 UTC m=+27.933454451 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.324148 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.324181 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.324190 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.324203 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.324213 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.333499 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-whd8d" podStartSLOduration=4.333480882 podStartE2EDuration="4.333480882s" podCreationTimestamp="2026-01-23 08:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:17.320325972 +0000 UTC m=+23.948376157" watchObservedRunningTime="2026-01-23 08:15:17.333480882 +0000 UTC m=+23.961531067" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.349774 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=3.349752362 podStartE2EDuration="3.349752362s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:17.349524226 +0000 UTC m=+23.977574421" watchObservedRunningTime="2026-01-23 08:15:17.349752362 +0000 UTC m=+23.977802547" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.376338 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.37632102 podStartE2EDuration="4.37632102s" podCreationTimestamp="2026-01-23 08:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:17.375325945 +0000 UTC m=+24.003376130" watchObservedRunningTime="2026-01-23 08:15:17.37632102 +0000 UTC m=+24.004371205" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.382232 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xgspg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.406342 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.406381 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406479 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406493 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406503 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406505 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406535 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406543 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:21.406530981 +0000 UTC m=+28.034581166 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406547 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.406613 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:21.406597853 +0000 UTC m=+28.034648038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.420145 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podStartSLOduration=3.420129412 podStartE2EDuration="3.420129412s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:17.419730703 +0000 UTC m=+24.047780898" watchObservedRunningTime="2026-01-23 08:15:17.420129412 +0000 UTC m=+24.048179597" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.427003 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.427063 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.427073 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.427089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.427101 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: W0123 08:15:17.445823 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2054c19c_47dc_4e16_b1d2_147e07994dc4.slice/crio-4b7b758311aca008705917f77448f67979b99ff7943952a21925f8ffcf54689b WatchSource:0}: Error finding container 4b7b758311aca008705917f77448f67979b99ff7943952a21925f8ffcf54689b: Status 404 returned error can't find the container with id 4b7b758311aca008705917f77448f67979b99ff7943952a21925f8ffcf54689b Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.529177 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.529510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.529520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.529538 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.529550 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.617352 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:21:18.946513236 +0000 UTC Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.632510 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.632559 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.632572 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.632591 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.632603 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.657294 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.657350 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.657431 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.657475 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.679211 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj"] Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.679670 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.681508 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.681992 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.708877 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.708923 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xh6\" (UniqueName: \"kubernetes.io/projected/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-kube-api-access-x7xh6\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.709036 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.709069 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: 
\"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.719595 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vtsqg"] Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.720081 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.720160 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtsqg" podUID="3416422c-d1ef-463b-a846-11d6ea9715c3" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.734166 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.734198 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.734207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.734219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.734227 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810027 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810077 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7xh6\" (UniqueName: \"kubernetes.io/projected/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-kube-api-access-x7xh6\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810111 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810145 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810172 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810205 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnln\" (UniqueName: \"kubernetes.io/projected/3416422c-d1ef-463b-a846-11d6ea9715c3-kube-api-access-kbnln\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810731 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.810978 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.815171 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.830137 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7xh6\" (UniqueName: \"kubernetes.io/projected/d044d9d0-dc1a-4210-a9b4-7229b7b5292b-kube-api-access-x7xh6\") pod \"ovnkube-control-plane-749d76644c-g6wfj\" (UID: \"d044d9d0-dc1a-4210-a9b4-7229b7b5292b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.836420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.836458 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.836471 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.836490 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.836501 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.911300 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.911377 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnln\" (UniqueName: \"kubernetes.io/projected/3416422c-d1ef-463b-a846-11d6ea9715c3-kube-api-access-kbnln\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.911485 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: E0123 08:15:17.911565 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs podName:3416422c-d1ef-463b-a846-11d6ea9715c3 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:18.411546706 +0000 UTC m=+25.039596891 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs") pod "network-metrics-daemon-vtsqg" (UID: "3416422c-d1ef-463b-a846-11d6ea9715c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.927362 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnln\" (UniqueName: \"kubernetes.io/projected/3416422c-d1ef-463b-a846-11d6ea9715c3-kube-api-access-kbnln\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.939427 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.939469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.939477 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.939491 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.939500 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:17Z","lastTransitionTime":"2026-01-23T08:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:17 crc kubenswrapper[4860]: I0123 08:15:17.990772 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" Jan 23 08:15:18 crc kubenswrapper[4860]: W0123 08:15:18.008058 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd044d9d0_dc1a_4210_a9b4_7229b7b5292b.slice/crio-9d5156908af0a23897ee43e2f92461195a6a73d072c777c8b1c33ad7254c7252 WatchSource:0}: Error finding container 9d5156908af0a23897ee43e2f92461195a6a73d072c777c8b1c33ad7254c7252: Status 404 returned error can't find the container with id 9d5156908af0a23897ee43e2f92461195a6a73d072c777c8b1c33ad7254c7252 Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.043244 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.043289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.043303 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.043339 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.043354 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.138587 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xgspg" event={"ID":"2054c19c-47dc-4e16-b1d2-147e07994dc4","Type":"ContainerStarted","Data":"cb6a01dcfcd2fef71041b849385b9bb1315e5365154884ee1a2c1e2753ecc8dc"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.138641 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xgspg" event={"ID":"2054c19c-47dc-4e16-b1d2-147e07994dc4","Type":"ContainerStarted","Data":"4b7b758311aca008705917f77448f67979b99ff7943952a21925f8ffcf54689b"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.144047 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e33feb5-d4ac-4ca7-96ce-260dbd32e192" containerID="92335bcc37831da53ea1d85f2949ba5e531e9efe18abf7d6bf68485eba2b0611" exitCode=0 Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.144096 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerDied","Data":"92335bcc37831da53ea1d85f2949ba5e531e9efe18abf7d6bf68485eba2b0611"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.146292 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.146321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.146329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.146344 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.146343 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" event={"ID":"d044d9d0-dc1a-4210-a9b4-7229b7b5292b","Type":"ContainerStarted","Data":"9d5156908af0a23897ee43e2f92461195a6a73d072c777c8b1c33ad7254c7252"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.146353 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.156206 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xgspg" podStartSLOduration=4.155609277 podStartE2EDuration="4.155609277s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:18.153596606 +0000 UTC m=+24.781646791" watchObservedRunningTime="2026-01-23 08:15:18.155609277 +0000 UTC m=+24.783659462" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.248526 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.248561 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.248569 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.248584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.248595 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.351092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.351134 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.351145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.351162 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.351174 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.416773 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:18 crc kubenswrapper[4860]: E0123 08:15:18.416978 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:18 crc kubenswrapper[4860]: E0123 08:15:18.417096 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs podName:3416422c-d1ef-463b-a846-11d6ea9715c3 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:19.417073845 +0000 UTC m=+26.045124040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs") pod "network-metrics-daemon-vtsqg" (UID: "3416422c-d1ef-463b-a846-11d6ea9715c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.453853 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.453904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.453916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.453936 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.453950 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.557092 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.557131 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.557143 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.557161 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.557182 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.618171 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:39:14.409663493 +0000 UTC Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.656622 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:18 crc kubenswrapper[4860]: E0123 08:15:18.656745 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.659212 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.659237 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.659246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.659256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.659266 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.761480 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.761515 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.761527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.761543 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.761553 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.864676 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.864734 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.864751 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.864775 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.864794 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.967280 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.967321 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.967332 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.967347 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:18 crc kubenswrapper[4860]: I0123 08:15:18.967356 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:18Z","lastTransitionTime":"2026-01-23T08:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.069715 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.069956 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.069965 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.069979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.069988 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.153314 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.155462 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e33feb5-d4ac-4ca7-96ce-260dbd32e192" containerID="a07652a2dd7647d0a309bd977af8a1f4781868da07b79543d3f7a571081ba0b3" exitCode=0 Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.155513 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerDied","Data":"a07652a2dd7647d0a309bd977af8a1f4781868da07b79543d3f7a571081ba0b3"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.159371 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" event={"ID":"d044d9d0-dc1a-4210-a9b4-7229b7b5292b","Type":"ContainerStarted","Data":"5b72b95d5fe7f22b8edabfcb8cef7c9da306248ded28443a7b5fcafa5b14539e"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.159398 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" event={"ID":"d044d9d0-dc1a-4210-a9b4-7229b7b5292b","Type":"ContainerStarted","Data":"6cb13bac76bd28591dcd90668fb9ceed1baf7025adf98802603cc6736d10d371"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.172219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.172250 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.172260 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.172274 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.172284 4860 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.197517 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-g6wfj" podStartSLOduration=5.197493751 podStartE2EDuration="5.197493751s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:19.196827943 +0000 UTC m=+25.824878148" watchObservedRunningTime="2026-01-23 08:15:19.197493751 +0000 UTC m=+25.825543946" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.274245 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.274306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.274318 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.274338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.274351 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.376042 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.376079 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.376088 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.376103 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.376113 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.425325 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:19 crc kubenswrapper[4860]: E0123 08:15:19.425495 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:19 crc kubenswrapper[4860]: E0123 08:15:19.425547 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs podName:3416422c-d1ef-463b-a846-11d6ea9715c3 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:21.425534298 +0000 UTC m=+28.053584483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs") pod "network-metrics-daemon-vtsqg" (UID: "3416422c-d1ef-463b-a846-11d6ea9715c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.478769 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.478816 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.478832 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.478854 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.478871 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.582260 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.582357 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.582375 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.582402 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.582420 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.619152 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:36:22.473049258 +0000 UTC Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.657108 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.657177 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.657340 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:19 crc kubenswrapper[4860]: E0123 08:15:19.657333 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:19 crc kubenswrapper[4860]: E0123 08:15:19.657427 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtsqg" podUID="3416422c-d1ef-463b-a846-11d6ea9715c3" Jan 23 08:15:19 crc kubenswrapper[4860]: E0123 08:15:19.657493 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.687248 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.687306 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.687323 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.687348 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.687367 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.790992 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.791089 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.791100 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.791115 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.791124 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.893716 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.893758 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.893784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.893799 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.893809 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.996289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.996342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.996356 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.996380 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:19 crc kubenswrapper[4860]: I0123 08:15:19.996393 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:19Z","lastTransitionTime":"2026-01-23T08:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.097892 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.097922 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.097931 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.097944 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.097954 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.172043 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerStarted","Data":"e1898b926655e48c376e87fe521526031f07074024bcf29a2a2da55531af16a3"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.172899 4860 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.199759 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.199784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.199793 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.199803 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.199812 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.303825 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.303876 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.303887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.304319 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.304343 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.406463 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.406540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.406560 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.406587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.406605 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.509772 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.509830 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.509846 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.509868 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.509884 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.600280 4860 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.612061 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.612103 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.612120 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.612136 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.612149 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.619297 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:51:56.939019181 +0000 UTC Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.656674 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:20 crc kubenswrapper[4860]: E0123 08:15:20.656794 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.714345 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.714395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.714405 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.714424 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.714433 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.816605 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.816641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.816651 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.816667 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.816678 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.919097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.919135 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.919145 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.919160 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:20 crc kubenswrapper[4860]: I0123 08:15:20.919174 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:20Z","lastTransitionTime":"2026-01-23T08:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.021184 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.021438 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.021449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.021465 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.021474 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.123641 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.123686 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.123696 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.123711 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.123724 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.178149 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e33feb5-d4ac-4ca7-96ce-260dbd32e192" containerID="e1898b926655e48c376e87fe521526031f07074024bcf29a2a2da55531af16a3" exitCode=0 Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.178219 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerDied","Data":"e1898b926655e48c376e87fe521526031f07074024bcf29a2a2da55531af16a3"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.225740 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.225784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.225795 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.225812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.225823 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.327704 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.327752 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.327763 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.327779 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.327793 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.343900 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.344113 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:29.344080598 +0000 UTC m=+35.972130783 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.344233 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.344332 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.344495 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.344542 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.344551 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:29.344542409 +0000 UTC m=+35.972592594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.344640 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:29.344622661 +0000 UTC m=+35.972672846 (durationBeforeRetry 8s). 
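
The UnmountVolume.TearDown failure above ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") means the kubelet wants to tear down a hostpath-provisioner PVC but that CSI plugin has not registered with this kubelet yet, so the operation is parked and retried 8 s later (durationBeforeRetry 8s). The drivers a node currently advertises are visible in its CSINode object; the client-go sketch below reads it for node "crc". The kubeconfig path is an assumption for the example; the node and driver names come from the log.

    // csidrivers.go - sketch: list the CSI drivers registered on node "crc"
    // via its CSINode object and check for kubevirt.io.hostpath-provisioner.
    // The kubeconfig path is an assumed value for this example.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        found := false
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered CSI driver:", d.Name)
            if d.Name == "kubevirt.io.hostpath-provisioner" {
                found = true
            }
        }
        fmt.Println("hostpath-provisioner registered:", found)
    }
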
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.430672 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.430729 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.430741 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.430761 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.430773 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.445528 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.445593 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.445649 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445709 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445734 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445746 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445794 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:29.445777306 +0000 UTC m=+36.073827481 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445812 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445844 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445856 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445863 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445872 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs podName:3416422c-d1ef-463b-a846-11d6ea9715c3 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:25.445852199 +0000 UTC m=+32.073902424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs") pod "network-metrics-daemon-vtsqg" (UID: "3416422c-d1ef-463b-a846-11d6ea9715c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.445890 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:29.44588339 +0000 UTC m=+36.073933575 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.534468 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.534540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.534564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.534599 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.534624 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.620080 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:12:36.371937802 +0000 UTC Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.637326 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.637365 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.637376 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.637392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.637405 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.657737 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.657770 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.657797 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.658575 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.658750 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtsqg" podUID="3416422c-d1ef-463b-a846-11d6ea9715c3" Jan 23 08:15:21 crc kubenswrapper[4860]: E0123 08:15:21.658918 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.739542 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.739578 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.739587 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.739602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.739613 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.843692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.843756 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.843788 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.843812 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.843856 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.945821 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.945863 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.945875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.945894 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:21 crc kubenswrapper[4860]: I0123 08:15:21.945906 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:21Z","lastTransitionTime":"2026-01-23T08:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.049446 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.049743 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.049754 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.049770 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.049782 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.151919 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.151967 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.151979 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.152000 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.152031 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.190456 4860 generic.go:334] "Generic (PLEG): container finished" podID="2e33feb5-d4ac-4ca7-96ce-260dbd32e192" containerID="ca75986654f3467244d59f433974bcd94c2c4674fb6dac8b9d8fb0beb01c5d53" exitCode=0 Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.190516 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerDied","Data":"ca75986654f3467244d59f433974bcd94c2c4674fb6dac8b9d8fb0beb01c5d53"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.196762 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerStarted","Data":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.197175 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.197207 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.197220 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.222253 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.222317 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.238312 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podStartSLOduration=8.238283955 podStartE2EDuration="8.238283955s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:22.23608302 +0000 UTC m=+28.864133215" 
watchObservedRunningTime="2026-01-23 08:15:22.238283955 +0000 UTC m=+28.866334170" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.255050 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.255087 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.255098 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.255114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.255125 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.357836 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.357864 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.357875 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.357889 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.357899 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.459745 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.459813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.459823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.459842 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.459854 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.562835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.562904 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.562924 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.562947 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.562963 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.620756 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:48:14.751396632 +0000 UTC Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.656743 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:22 crc kubenswrapper[4860]: E0123 08:15:22.656901 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.665351 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.665411 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.665429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.665454 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.665471 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.767903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.768007 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.768069 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.768094 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.768112 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.870496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.870540 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.870549 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.870564 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.870573 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.973841 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.973887 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.973898 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.973916 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:22 crc kubenswrapper[4860]: I0123 08:15:22.973927 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:22Z","lastTransitionTime":"2026-01-23T08:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.077077 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.077151 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.077175 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.077207 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.077231 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.180713 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.180776 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.180786 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.180800 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.180809 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.202472 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4k855" event={"ID":"2e33feb5-d4ac-4ca7-96ce-260dbd32e192","Type":"ContainerStarted","Data":"8610d3e7494ebf24c3ecb33cff6cb89f8d33fb9f58372b9f6457dc45f97262ae"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.223987 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4k855" podStartSLOduration=9.223967605 podStartE2EDuration="9.223967605s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:23.223663787 +0000 UTC m=+29.851713972" watchObservedRunningTime="2026-01-23 08:15:23.223967605 +0000 UTC m=+29.852017790" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.282778 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.282813 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.282823 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.282840 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.282851 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.385719 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.385780 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.385797 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.385822 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.385839 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.488126 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.488165 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.488176 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.488192 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.488203 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.590710 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.590943 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.591048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.591157 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.591239 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.621044 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:38:10.060719522 +0000 UTC Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.657413 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.657470 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:23 crc kubenswrapper[4860]: E0123 08:15:23.658367 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.658403 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:23 crc kubenswrapper[4860]: E0123 08:15:23.658560 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtsqg" podUID="3416422c-d1ef-463b-a846-11d6ea9715c3" Jan 23 08:15:23 crc kubenswrapper[4860]: E0123 08:15:23.658724 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.694211 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.694246 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.694256 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.694271 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.694281 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.797185 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.797449 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.797527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.797602 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.797682 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.881234 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vtsqg"] Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.899640 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.899677 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.899721 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.899735 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:23 crc kubenswrapper[4860]: I0123 08:15:23.899743 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:23Z","lastTransitionTime":"2026-01-23T08:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.002373 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.002586 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.002594 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.002607 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.002615 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.105253 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.105316 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.105338 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.105366 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.105388 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.206168 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:24 crc kubenswrapper[4860]: E0123 08:15:24.206527 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vtsqg" podUID="3416422c-d1ef-463b-a846-11d6ea9715c3" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.208352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.208381 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.208420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.208435 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.208445 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.310288 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.310337 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.310353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.310369 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.310381 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.412497 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.412541 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.412551 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.412600 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.412615 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.502109 4860 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.515204 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.515264 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.515285 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.515310 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.515328 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.618383 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.618420 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.618429 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.618443 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.618451 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.621852 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:07:30.725519525 +0000 UTC Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.657323 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:24 crc kubenswrapper[4860]: E0123 08:15:24.657504 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.721581 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.721642 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.721650 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.721665 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.721678 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.824166 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.824208 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.824219 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.824235 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.824246 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.927444 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.927527 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.927555 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.927584 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:24 crc kubenswrapper[4860]: I0123 08:15:24.927605 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:24Z","lastTransitionTime":"2026-01-23T08:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.030353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.030412 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.030442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.030465 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.030482 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.133520 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.133592 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.133614 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.133647 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.133671 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.237048 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.237097 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.237107 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.237128 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.237137 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.339760 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.339807 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.339819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.339835 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.339848 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.442414 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.442464 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.442475 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.442493 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.442503 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.486102 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:25 crc kubenswrapper[4860]: E0123 08:15:25.486233 4860 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:25 crc kubenswrapper[4860]: E0123 08:15:25.486301 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs podName:3416422c-d1ef-463b-a846-11d6ea9715c3 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.486282144 +0000 UTC m=+40.114332329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs") pod "network-metrics-daemon-vtsqg" (UID: "3416422c-d1ef-463b-a846-11d6ea9715c3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.546200 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.546270 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.546289 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.546317 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.546334 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.622442 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:39:32.282425736 +0000 UTC Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.648760 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.648831 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.648848 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.648872 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.648891 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.657359 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.657362 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:25 crc kubenswrapper[4860]: E0123 08:15:25.657554 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vtsqg" podUID="3416422c-d1ef-463b-a846-11d6ea9715c3" Jan 23 08:15:25 crc kubenswrapper[4860]: E0123 08:15:25.657680 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.657377 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:25 crc kubenswrapper[4860]: E0123 08:15:25.657789 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.751392 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.751469 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.751486 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.751509 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.751527 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.854494 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.854545 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.854568 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.854597 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.854620 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.957281 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.957329 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.957341 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.957358 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:25 crc kubenswrapper[4860]: I0123 08:15:25.957370 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:25Z","lastTransitionTime":"2026-01-23T08:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.060903 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.060953 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.060971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.060994 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.061050 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.163496 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.163550 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.163563 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.163582 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.163595 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.266342 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.266388 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.266423 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.266442 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.266454 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.369265 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.369336 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.369353 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.369379 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.369396 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.472567 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.472629 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.472649 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.472680 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.472697 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.574971 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.575068 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.575114 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.575147 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.575171 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.622702 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:00:12.258534067 +0000 UTC Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.657332 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:26 crc kubenswrapper[4860]: E0123 08:15:26.657547 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.677694 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.677767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.677790 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.677819 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.677840 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.981395 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.981692 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.981829 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.981941 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:26 crc kubenswrapper[4860]: I0123 08:15:26.982082 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:26Z","lastTransitionTime":"2026-01-23T08:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.075011 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.075352 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.075488 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.075679 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.075839 4860 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:15:27Z","lastTransitionTime":"2026-01-23T08:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.098715 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.098767 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.098784 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.098806 4860 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.098933 4860 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.135624 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.136315 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.139356 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.139658 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.139958 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.142136 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.171928 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdn58"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.172444 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.174366 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2g9xr"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.175151 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.186875 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.187173 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.187178 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.187277 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.187317 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.187317 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.187433 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.194253 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.194479 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.194497 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.194844 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.195052 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.197489 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzzcx"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.197945 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.198142 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.198417 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.198505 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fb64k"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.198819 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.199182 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.199658 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.200148 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.200238 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.200736 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.203160 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8khjj"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.203586 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.204446 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.204574 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.204709 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.204944 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/25973908-6ad1-4e3b-a493-de9f5baef4e8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205006 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205041 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205055 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205078 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205105 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/25973908-6ad1-4e3b-a493-de9f5baef4e8-images\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205128 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205148 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25973908-6ad1-4e3b-a493-de9f5baef4e8-config\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205180 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205193 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq276\" (UniqueName: \"kubernetes.io/projected/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-kube-api-access-nq276\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205226 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205274 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzf7c\" (UniqueName: \"kubernetes.io/projected/25973908-6ad1-4e3b-a493-de9f5baef4e8-kube-api-access-rzf7c\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205302 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205332 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205422 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-client-ca\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-config\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205586 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-serving-cert\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205904 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.205947 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206071 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206228 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206259 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206435 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206906 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206485 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206600 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206606 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.206687 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207241 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207286 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207729 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207928 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.208112 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207576 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207619 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.207665 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.209268 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.209821 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.210248 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.210560 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.210809 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.210950 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211166 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211403 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211280 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211335 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211381 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.212777 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211619 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211660 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211664 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211712 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211717 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.211885 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.212603 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.213480 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.213719 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.213864 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.213880 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.214100 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.214227 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.214543 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.214646 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vtj8m"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.214854 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.215082 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.215186 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.215598 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.215882 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.223119 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.224718 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.225737 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.226637 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rpmlf"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.226995 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.227971 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.228295 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.231156 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.232775 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.233720 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.234250 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.237568 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.237931 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.237976 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.238068 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.238337 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.238538 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.238616 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.238694 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.238859 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239069 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239096 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239127 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239199 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239215 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239259 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239080 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239314 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239385 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" 
Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239405 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239451 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239557 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.239664 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.241312 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.241494 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nltcq"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.241673 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.241849 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.243089 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.243408 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.243846 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2g9xr"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.245085 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.246696 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.248329 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.256718 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.259299 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.277554 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.297628 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306427 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/25973908-6ad1-4e3b-a493-de9f5baef4e8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306499 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa868339-9c74-409d-abba-2950409b3918-serving-cert\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306529 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306550 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-service-ca-bundle\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306629 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306667 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7crw\" (UniqueName: \"kubernetes.io/projected/aa868339-9c74-409d-abba-2950409b3918-kube-api-access-g7crw\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306701 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12330295-b7da-48bc-ad66-f382f3eedace-serving-cert\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306733 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/25973908-6ad1-4e3b-a493-de9f5baef4e8-images\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306756 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306785 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306839 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306874 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-oauth-serving-cert\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.306897 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307438 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307464 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dww6l\" (UniqueName: \"kubernetes.io/projected/3ad4c125-6200-4d28-aeb0-8a0390508c91-kube-api-access-dww6l\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307483 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/602a7467-b6cc-41e4-acd2-22f69df313e9-machine-approver-tls\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nq276\" (UniqueName: \"kubernetes.io/projected/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-kube-api-access-nq276\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307527 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-trusted-ca\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307554 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-etcd-client\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307574 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-str9s\" (UniqueName: \"kubernetes.io/projected/8149687b-dd22-4d3d-998c-efd2692db14b-kube-api-access-str9s\") pod \"cluster-samples-operator-665b6dd947-6gqph\" (UID: \"8149687b-dd22-4d3d-998c-efd2692db14b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307591 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-audit\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307608 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-serving-cert\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307628 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307645 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307663 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307705 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307730 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307722 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307749 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-dir\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307798 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307820 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-client-ca\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307840 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307858 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/25973908-6ad1-4e3b-a493-de9f5baef4e8-images\") pod 
\"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307879 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-encryption-config\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307923 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz74s\" (UniqueName: \"kubernetes.io/projected/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-kube-api-access-qz74s\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307975 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.307993 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-audit-policies\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308035 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308114 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6lb\" (UniqueName: \"kubernetes.io/projected/3615b8d9-7529-46e3-9320-1e8a70ced9a5-kube-api-access-rd6lb\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308212 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d492e4a-90c9-448c-b02c-e7b286055cd4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308267 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nq64\" (UniqueName: \"kubernetes.io/projected/72589342-3cb8-4e43-bad0-f1726f70d77a-kube-api-access-4nq64\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308332 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-service-ca\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308350 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-trusted-ca-bundle\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308370 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-config\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308386 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-serving-cert\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308404 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308419 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-config\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308433 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/72589342-3cb8-4e43-bad0-f1726f70d77a-console-serving-cert\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3615b8d9-7529-46e3-9320-1e8a70ced9a5-audit-dir\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308479 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-audit-dir\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308495 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jj99\" (UniqueName: \"kubernetes.io/projected/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-kube-api-access-7jj99\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308511 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308532 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/602a7467-b6cc-41e4-acd2-22f69df313e9-auth-proxy-config\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308558 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602a7467-b6cc-41e4-acd2-22f69df313e9-config\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308622 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4rr\" (UniqueName: \"kubernetes.io/projected/1842f8a2-d295-4d71-be98-450d5abc2c08-kube-api-access-vs4rr\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308662 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-serving-cert\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308688 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308710 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-config\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308729 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308747 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vn8d\" (UniqueName: \"kubernetes.io/projected/686eac92-c672-4c4d-bf80-8e47a557a52c-kube-api-access-8vn8d\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308766 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308785 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wqg\" (UniqueName: \"kubernetes.io/projected/12330295-b7da-48bc-ad66-f382f3eedace-kube-api-access-c2wqg\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308819 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67vbp\" (UniqueName: \"kubernetes.io/projected/602a7467-b6cc-41e4-acd2-22f69df313e9-kube-api-access-67vbp\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308834 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3615b8d9-7529-46e3-9320-1e8a70ced9a5-node-pullsecrets\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308843 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308865 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbt9v\" (UniqueName: \"kubernetes.io/projected/966345d1-b077-4fab-89b6-ca2830cbe04a-kube-api-access-tbt9v\") pod \"downloads-7954f5f757-nltcq\" (UID: \"966345d1-b077-4fab-89b6-ca2830cbe04a\") " pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308905 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d492e4a-90c9-448c-b02c-e7b286055cd4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308939 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25973908-6ad1-4e3b-a493-de9f5baef4e8-config\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308956 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.308969 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309122 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-config\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309175 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/686eac92-c672-4c4d-bf80-8e47a557a52c-serving-cert\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309224 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-encryption-config\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309242 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-etcd-client\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309315 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5gs\" (UniqueName: \"kubernetes.io/projected/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-kube-api-access-zd5gs\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309365 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-config\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-policies\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309407 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-image-import-ca\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309441 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzf7c\" (UniqueName: \"kubernetes.io/projected/25973908-6ad1-4e3b-a493-de9f5baef4e8-kube-api-access-rzf7c\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309469 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309523 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8149687b-dd22-4d3d-998c-efd2692db14b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6gqph\" (UID: \"8149687b-dd22-4d3d-998c-efd2692db14b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25973908-6ad1-4e3b-a493-de9f5baef4e8-config\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309536 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-config\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309542 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1842f8a2-d295-4d71-be98-450d5abc2c08-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309585 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-client-ca\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309603 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aa868339-9c74-409d-abba-2950409b3918-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309619 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/72589342-3cb8-4e43-bad0-f1726f70d77a-console-oauth-config\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309643 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1842f8a2-d295-4d71-be98-450d5abc2c08-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309665 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-serving-cert\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309686 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d492e4a-90c9-448c-b02c-e7b286055cd4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309707 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-console-config\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.309730 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4cj4\" (UniqueName: \"kubernetes.io/projected/5d492e4a-90c9-448c-b02c-e7b286055cd4-kube-api-access-j4cj4\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.310284 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-client-ca\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.311605 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.311697 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/25973908-6ad1-4e3b-a493-de9f5baef4e8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.312447 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-serving-cert\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.323143 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.337877 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.358695 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.377963 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411075 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411194 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nq64\" (UniqueName: \"kubernetes.io/projected/72589342-3cb8-4e43-bad0-f1726f70d77a-kube-api-access-4nq64\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411234 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-service-ca\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411266 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-trusted-ca-bundle\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411333 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-config\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/72589342-3cb8-4e43-bad0-f1726f70d77a-console-serving-cert\") pod 
\"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411415 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3615b8d9-7529-46e3-9320-1e8a70ced9a5-audit-dir\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411447 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/602a7467-b6cc-41e4-acd2-22f69df313e9-auth-proxy-config\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411479 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602a7467-b6cc-41e4-acd2-22f69df313e9-config\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411510 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-audit-dir\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jj99\" (UniqueName: \"kubernetes.io/projected/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-kube-api-access-7jj99\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411581 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4rr\" (UniqueName: \"kubernetes.io/projected/1842f8a2-d295-4d71-be98-450d5abc2c08-kube-api-access-vs4rr\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411692 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 
08:15:27.411724 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-config\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411757 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-serving-cert\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411790 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67vbp\" (UniqueName: \"kubernetes.io/projected/602a7467-b6cc-41e4-acd2-22f69df313e9-kube-api-access-67vbp\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411827 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3615b8d9-7529-46e3-9320-1e8a70ced9a5-node-pullsecrets\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411862 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbt9v\" (UniqueName: \"kubernetes.io/projected/966345d1-b077-4fab-89b6-ca2830cbe04a-kube-api-access-tbt9v\") pod \"downloads-7954f5f757-nltcq\" (UID: \"966345d1-b077-4fab-89b6-ca2830cbe04a\") " pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411896 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vn8d\" (UniqueName: \"kubernetes.io/projected/686eac92-c672-4c4d-bf80-8e47a557a52c-kube-api-access-8vn8d\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411929 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.411968 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2wqg\" (UniqueName: \"kubernetes.io/projected/12330295-b7da-48bc-ad66-f382f3eedace-kube-api-access-c2wqg\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412040 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412057 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d492e4a-90c9-448c-b02c-e7b286055cd4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412251 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-service-ca\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412845 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412900 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-config\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412921 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686eac92-c672-4c4d-bf80-8e47a557a52c-serving-cert\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412935 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-config\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412939 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-encryption-config\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.412994 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-etcd-client\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413037 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-policies\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413057 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-image-import-ca\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413076 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5gs\" (UniqueName: \"kubernetes.io/projected/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-kube-api-access-zd5gs\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413094 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-config\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413118 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/602a7467-b6cc-41e4-acd2-22f69df313e9-auth-proxy-config\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413192 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602a7467-b6cc-41e4-acd2-22f69df313e9-config\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8149687b-dd22-4d3d-998c-efd2692db14b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6gqph\" (UID: \"8149687b-dd22-4d3d-998c-efd2692db14b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413243 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1842f8a2-d295-4d71-be98-450d5abc2c08-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413292 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413346 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/72589342-3cb8-4e43-bad0-f1726f70d77a-console-oauth-config\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413392 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aa868339-9c74-409d-abba-2950409b3918-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413416 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1842f8a2-d295-4d71-be98-450d5abc2c08-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413410 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3615b8d9-7529-46e3-9320-1e8a70ced9a5-audit-dir\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413498 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3615b8d9-7529-46e3-9320-1e8a70ced9a5-node-pullsecrets\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.413855 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aa868339-9c74-409d-abba-2950409b3918-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.414193 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-config\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.414190 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-console-config\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc 
kubenswrapper[4860]: I0123 08:15:27.414238 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-serving-cert\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.414257 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d492e4a-90c9-448c-b02c-e7b286055cd4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.415377 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-config\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.415675 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-console-config\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.415749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-audit-dir\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.416169 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-trusted-ca-bundle\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.416419 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d492e4a-90c9-448c-b02c-e7b286055cd4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.416460 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-policies\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.416783 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1842f8a2-d295-4d71-be98-450d5abc2c08-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.417494 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/72589342-3cb8-4e43-bad0-f1726f70d77a-console-oauth-config\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.417643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-config\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.417768 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4cj4\" (UniqueName: \"kubernetes.io/projected/5d492e4a-90c9-448c-b02c-e7b286055cd4-kube-api-access-j4cj4\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.417841 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa868339-9c74-409d-abba-2950409b3918-serving-cert\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.417923 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.418113 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8149687b-dd22-4d3d-998c-efd2692db14b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6gqph\" (UID: \"8149687b-dd22-4d3d-998c-efd2692db14b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.418532 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.419335 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: 
I0123 08:15:27.419643 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-image-import-ca\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420152 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420281 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-service-ca-bundle\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420321 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7crw\" (UniqueName: \"kubernetes.io/projected/aa868339-9c74-409d-abba-2950409b3918-kube-api-access-g7crw\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12330295-b7da-48bc-ad66-f382f3eedace-serving-cert\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420365 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420386 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420403 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-oauth-serving-cert\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420420 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dww6l\" (UniqueName: \"kubernetes.io/projected/3ad4c125-6200-4d28-aeb0-8a0390508c91-kube-api-access-dww6l\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420437 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/602a7467-b6cc-41e4-acd2-22f69df313e9-machine-approver-tls\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420456 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420471 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420501 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-trusted-ca\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420517 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-serving-cert\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420534 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-etcd-client\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420553 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-str9s\" (UniqueName: \"kubernetes.io/projected/8149687b-dd22-4d3d-998c-efd2692db14b-kube-api-access-str9s\") pod \"cluster-samples-operator-665b6dd947-6gqph\" (UID: \"8149687b-dd22-4d3d-998c-efd2692db14b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420574 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-audit\") pod \"apiserver-76f77b778f-fzzcx\" (UID: 
\"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420592 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420609 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420636 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-dir\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420654 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420674 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420689 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420705 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz74s\" (UniqueName: \"kubernetes.io/projected/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-kube-api-access-qz74s\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420722 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-client-ca\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420736 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420751 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-encryption-config\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420766 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6lb\" (UniqueName: \"kubernetes.io/projected/3615b8d9-7529-46e3-9320-1e8a70ced9a5-kube-api-access-rd6lb\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420771 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686eac92-c672-4c4d-bf80-8e47a557a52c-serving-cert\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420777 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-service-ca-bundle\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420781 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d492e4a-90c9-448c-b02c-e7b286055cd4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.420957 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-audit-policies\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.421277 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1842f8a2-d295-4d71-be98-450d5abc2c08-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.422103 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-audit-policies\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.422648 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-client-ca\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.423286 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-serving-cert\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.423881 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-audit\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.425078 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.425313 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.425368 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.426100 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq276\" (UniqueName: \"kubernetes.io/projected/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-kube-api-access-nq276\") pod \"controller-manager-879f6c89f-hdn58\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.426265 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/72589342-3cb8-4e43-bad0-f1726f70d77a-oauth-serving-cert\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc 
kubenswrapper[4860]: I0123 08:15:27.426698 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.426701 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa868339-9c74-409d-abba-2950409b3918-serving-cert\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.426836 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-serving-cert\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.426936 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-encryption-config\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.427066 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-etcd-client\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.427168 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-dir\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.427274 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/602a7467-b6cc-41e4-acd2-22f69df313e9-machine-approver-tls\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.427592 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.427710 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-trusted-ca\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " 
pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.428346 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.428524 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.428537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.429378 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12330295-b7da-48bc-ad66-f382f3eedace-serving-cert\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.430327 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d492e4a-90c9-448c-b02c-e7b286055cd4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.430965 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.431911 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.432385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc 
kubenswrapper[4860]: I0123 08:15:27.432487 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.432969 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12330295-b7da-48bc-ad66-f382f3eedace-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.435223 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-etcd-client\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.435432 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-encryption-config\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.435717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3615b8d9-7529-46e3-9320-1e8a70ced9a5-serving-cert\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.436146 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/72589342-3cb8-4e43-bad0-f1726f70d77a-console-serving-cert\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.448274 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4caa2617-25ed-41f1-ab8a-4b8672a6cc54-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z5dfl\" (UID: \"4caa2617-25ed-41f1-ab8a-4b8672a6cc54\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.453840 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.455709 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzf7c\" (UniqueName: \"kubernetes.io/projected/25973908-6ad1-4e3b-a493-de9f5baef4e8-kube-api-access-rzf7c\") pod \"machine-api-operator-5694c8668f-2g9xr\" (UID: \"25973908-6ad1-4e3b-a493-de9f5baef4e8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: W0123 08:15:27.469139 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4caa2617_25ed_41f1_ab8a_4b8672a6cc54.slice/crio-139bbebb17314358719e4393b93ea3b09a586f9f2485b6a40662a8c21c9c92cb WatchSource:0}: Error finding container 139bbebb17314358719e4393b93ea3b09a586f9f2485b6a40662a8c21c9c92cb: Status 404 returned error can't find the container with id 139bbebb17314358719e4393b93ea3b09a586f9f2485b6a40662a8c21c9c92cb Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.474082 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nq64\" (UniqueName: \"kubernetes.io/projected/72589342-3cb8-4e43-bad0-f1726f70d77a-kube-api-access-4nq64\") pod \"console-f9d7485db-vtj8m\" (UID: \"72589342-3cb8-4e43-bad0-f1726f70d77a\") " pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.484730 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3615b8d9-7529-46e3-9320-1e8a70ced9a5-etcd-serving-ca\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.489884 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fsxq7"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.490454 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.491829 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.492152 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.498035 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hzk6q"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.498409 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.498741 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.500280 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.506720 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xwk76"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.506850 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.507177 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.507343 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbt9v\" (UniqueName: \"kubernetes.io/projected/966345d1-b077-4fab-89b6-ca2830cbe04a-kube-api-access-tbt9v\") pod \"downloads-7954f5f757-nltcq\" (UID: \"966345d1-b077-4fab-89b6-ca2830cbe04a\") " pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.507695 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.513195 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.515072 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kc8q5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.525774 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8jdp4"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.526038 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.526951 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzz6\" (UniqueName: \"kubernetes.io/projected/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-kube-api-access-5wzz6\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.526994 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9f9de0-0f93-4b24-8808-f322cea95b38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527066 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/515bd73b-7a1b-414c-b189-25042a6049a3-webhook-cert\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527103 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-proxy-tls\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527136 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527150 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527211 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-signing-cabundle\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527311 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswzs\" (UniqueName: \"kubernetes.io/projected/515bd73b-7a1b-414c-b189-25042a6049a3-kube-api-access-cswzs\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527346 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-service-ca\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527368 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-client\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527491 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bcs7\" (UniqueName: \"kubernetes.io/projected/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-kube-api-access-4bcs7\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527544 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-signing-key\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527585 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-config\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527608 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/515bd73b-7a1b-414c-b189-25042a6049a3-tmpfs\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527703 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e9f9de0-0f93-4b24-8808-f322cea95b38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527740 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/515bd73b-7a1b-414c-b189-25042a6049a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527775 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjmg\" (UniqueName: 
\"kubernetes.io/projected/1e9f9de0-0f93-4b24-8808-f322cea95b38-kube-api-access-jbjmg\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527933 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-ca\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.527963 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-serving-cert\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.528082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjtl\" (UniqueName: \"kubernetes.io/projected/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-kube-api-access-xqjtl\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.528275 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.528411 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.528713 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k9j8h"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.530322 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.530518 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.531761 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.532942 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.534396 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.535337 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4rr\" (UniqueName: \"kubernetes.io/projected/1842f8a2-d295-4d71-be98-450d5abc2c08-kube-api-access-vs4rr\") pod \"openshift-apiserver-operator-796bbdcf4f-6sllw\" (UID: \"1842f8a2-d295-4d71-be98-450d5abc2c08\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.537085 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dr4bl"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.538738 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.540476 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.560776 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.563245 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.563927 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.564644 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.568992 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.569865 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.571087 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67vbp\" (UniqueName: \"kubernetes.io/projected/602a7467-b6cc-41e4-acd2-22f69df313e9-kube-api-access-67vbp\") pod \"machine-approver-56656f9798-nsbwd\" (UID: \"602a7467-b6cc-41e4-acd2-22f69df313e9\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.580381 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.585206 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5gs\" (UniqueName: \"kubernetes.io/projected/9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9-kube-api-access-zd5gs\") pod \"openshift-controller-manager-operator-756b6f6bc6-w2fd5\" (UID: \"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.585962 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.587004 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.587402 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2wqg\" (UniqueName: \"kubernetes.io/projected/12330295-b7da-48bc-ad66-f382f3eedace-kube-api-access-c2wqg\") pod \"authentication-operator-69f744f599-rpmlf\" (UID: \"12330295-b7da-48bc-ad66-f382f3eedace\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.587455 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.587852 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.588255 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.595973 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fzmlp"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.597587 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.601783 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-w9rkq"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.602933 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.603526 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.603723 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.605408 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dkzf5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.606047 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.611849 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.614591 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rlmgs"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.615252 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.617134 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.617739 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.618605 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.619129 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vn8d\" (UniqueName: \"kubernetes.io/projected/686eac92-c672-4c4d-bf80-8e47a557a52c-kube-api-access-8vn8d\") pod \"route-controller-manager-6576b87f9c-m5qwv\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.619541 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rgzmr"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.619857 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.619937 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d492e4a-90c9-448c-b02c-e7b286055cd4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.621491 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.621917 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.623227 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:26:50.725508693 +0000 UTC Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.623263 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.623631 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.624950 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vtj8m"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.627382 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fb64k"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629423 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-signing-key\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629469 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-config\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/515bd73b-7a1b-414c-b189-25042a6049a3-tmpfs\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e9f9de0-0f93-4b24-8808-f322cea95b38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629539 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/515bd73b-7a1b-414c-b189-25042a6049a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629562 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjmg\" (UniqueName: 
\"kubernetes.io/projected/1e9f9de0-0f93-4b24-8808-f322cea95b38-kube-api-access-jbjmg\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629586 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-serving-cert\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629601 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-ca\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629624 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjtl\" (UniqueName: \"kubernetes.io/projected/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-kube-api-access-xqjtl\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629666 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzz6\" (UniqueName: \"kubernetes.io/projected/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-kube-api-access-5wzz6\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629681 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9f9de0-0f93-4b24-8808-f322cea95b38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629709 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/515bd73b-7a1b-414c-b189-25042a6049a3-webhook-cert\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629724 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-proxy-tls\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629750 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629778 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-signing-cabundle\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629794 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswzs\" (UniqueName: \"kubernetes.io/projected/515bd73b-7a1b-414c-b189-25042a6049a3-kube-api-access-cswzs\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629813 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-client\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629849 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-service-ca\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.629863 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bcs7\" (UniqueName: \"kubernetes.io/projected/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-kube-api-access-4bcs7\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.630396 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/515bd73b-7a1b-414c-b189-25042a6049a3-tmpfs\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.631326 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.631415 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rpmlf"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.631643 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.633729 4860 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8khjj"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.633813 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.635137 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.638274 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzzcx"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.649084 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.651887 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.652913 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.657238 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jj99\" (UniqueName: \"kubernetes.io/projected/d22a61a3-fe8c-43f0-a6fc-16e5da78cd14-kube-api-access-7jj99\") pod \"console-operator-58897d9998-fb64k\" (UID: \"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14\") " pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.658226 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.658315 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.659112 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.665237 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4cj4\" (UniqueName: \"kubernetes.io/projected/5d492e4a-90c9-448c-b02c-e7b286055cd4-kube-api-access-j4cj4\") pod \"cluster-image-registry-operator-dc59b4c8b-tqw5v\" (UID: \"5d492e4a-90c9-448c-b02c-e7b286055cd4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.679685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-str9s\" (UniqueName: \"kubernetes.io/projected/8149687b-dd22-4d3d-998c-efd2692db14b-kube-api-access-str9s\") pod \"cluster-samples-operator-665b6dd947-6gqph\" (UID: \"8149687b-dd22-4d3d-998c-efd2692db14b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694704 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7crw\" (UniqueName: \"kubernetes.io/projected/aa868339-9c74-409d-abba-2950409b3918-kube-api-access-g7crw\") pod \"openshift-config-operator-7777fb866f-6w4d9\" (UID: \"aa868339-9c74-409d-abba-2950409b3918\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694782 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694825 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hzk6q"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694839 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xwk76"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694855 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fsxq7"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694868 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.694880 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.695975 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.696947 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.697936 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.699896 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.699953 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.704198 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nltcq"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.704884 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k9j8h"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.706630 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.710239 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kc8q5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.712387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6lb\" (UniqueName: \"kubernetes.io/projected/3615b8d9-7529-46e3-9320-1e8a70ced9a5-kube-api-access-rd6lb\") pod \"apiserver-76f77b778f-fzzcx\" (UID: \"3615b8d9-7529-46e3-9320-1e8a70ced9a5\") " pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.718436 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.722207 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w9rkq"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.722254 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.723502 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.724717 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rgzmr"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.725940 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.727700 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.735846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dww6l\" (UniqueName: \"kubernetes.io/projected/3ad4c125-6200-4d28-aeb0-8a0390508c91-kube-api-access-dww6l\") pod \"oauth-openshift-558db77b4-8khjj\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.742181 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8jdp4"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.743607 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dr4bl"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.747238 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 
08:15:27.750900 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.750925 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.752203 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.753569 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.755178 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdn58"] Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.758784 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.758881 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz74s\" (UniqueName: \"kubernetes.io/projected/c2b9037e-4bea-4f39-8822-8c4d9f9c3b08-kube-api-access-qz74s\") pod \"apiserver-7bbb656c7d-c6bwf\" (UID: \"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.782152 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.799495 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.820186 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.824212 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.837509 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.838295 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.847041 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.858151 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.860629 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.864323 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/515bd73b-7a1b-414c-b189-25042a6049a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.864350 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/515bd73b-7a1b-414c-b189-25042a6049a3-webhook-cert\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.866549 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.873277 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.878595 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.898245 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.918544 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.927358 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.933961 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.938466 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.944245 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-serving-cert\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.958926 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.964103 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-client\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.978991 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.981324 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-config\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:27 crc kubenswrapper[4860]: I0123 08:15:27.998750 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.019403 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.038949 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.078218 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.083991 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-proxy-tls\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.098263 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.118118 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.138592 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 08:15:28 crc 
kubenswrapper[4860]: I0123 08:15:28.159400 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.178399 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.186119 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-signing-key\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.198822 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.203919 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-signing-cabundle\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.239347 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.258294 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.277988 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.286722 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e9f9de0-0f93-4b24-8808-f322cea95b38-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.298316 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.302743 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9f9de0-0f93-4b24-8808-f322cea95b38-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.319620 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.339789 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.358503 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.379252 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.400261 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.419005 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.439695 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.458485 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.479824 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.500261 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.517807 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-ca\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.518404 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-etcd-service-ca\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.534198 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.537918 4860 request.go:700] Waited for 1.005614181s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.539884 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.546012 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" event={"ID":"602a7467-b6cc-41e4-acd2-22f69df313e9","Type":"ContainerStarted","Data":"ce3cb69155b58cf8214fadf8f4f6a3f3135d25c8784473f854731aca236947c8"} Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.548116 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" 
event={"ID":"4caa2617-25ed-41f1-ab8a-4b8672a6cc54","Type":"ContainerStarted","Data":"139bbebb17314358719e4393b93ea3b09a586f9f2485b6a40662a8c21c9c92cb"} Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.559713 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.559730 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2g9xr"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.567077 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.569303 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdn58"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.570391 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nltcq"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.578402 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 08:15:28 crc kubenswrapper[4860]: W0123 08:15:28.584533 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25973908_6ad1_4e3b_a493_de9f5baef4e8.slice/crio-3ea91b91efad4280320ea3d9003023836857a69208405818431b803a4b0dc28e WatchSource:0}: Error finding container 3ea91b91efad4280320ea3d9003023836857a69208405818431b803a4b0dc28e: Status 404 returned error can't find the container with id 3ea91b91efad4280320ea3d9003023836857a69208405818431b803a4b0dc28e Jan 23 08:15:28 crc kubenswrapper[4860]: W0123 08:15:28.588117 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a70f78c_30ad_42bb_a8d6_c7ef144db4f2.slice/crio-75974b4bea4ac0ad23c0087839eb8bd5637ab6c44e79e7c080f03562f6cab356 WatchSource:0}: Error finding container 75974b4bea4ac0ad23c0087839eb8bd5637ab6c44e79e7c080f03562f6cab356: Status 404 returned error can't find the container with id 75974b4bea4ac0ad23c0087839eb8bd5637ab6c44e79e7c080f03562f6cab356 Jan 23 08:15:28 crc kubenswrapper[4860]: W0123 08:15:28.594906 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod966345d1_b077_4fab_89b6_ca2830cbe04a.slice/crio-f32143a3f85a6368767ac6e69d9b6dd764e2b8732d5ffa4a6736be1ea3922783 WatchSource:0}: Error finding container f32143a3f85a6368767ac6e69d9b6dd764e2b8732d5ffa4a6736be1ea3922783: Status 404 returned error can't find the container with id f32143a3f85a6368767ac6e69d9b6dd764e2b8732d5ffa4a6736be1ea3922783 Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.598850 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.617786 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.644102 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 08:15:28 crc kubenswrapper[4860]: 
I0123 08:15:28.658894 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.659210 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.678584 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.699993 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.720474 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.739993 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.759869 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.778940 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.798524 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.822204 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.839857 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.853564 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.858977 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.878640 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.898732 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.918210 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.938875 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.950854 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.960243 4860 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.975644 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rpmlf"] Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.978590 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 08:15:28 crc kubenswrapper[4860]: I0123 08:15:28.986413 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vtj8m"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.003110 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.018807 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.038595 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.046189 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.058674 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.078264 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.078480 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.079709 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fb64k"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.087376 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.091173 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fzzcx"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.098479 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.117807 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.122086 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.135563 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8khjj"] Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.138826 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.157472 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.179159 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.199242 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.219252 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.239113 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.259656 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.278443 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.298580 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.320070 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.340001 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.356119 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.356413 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.356579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:29 crc kubenswrapper[4860]: E0123 08:15:29.356899 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:45.356850527 +0000 UTC m=+51.984900762 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.359918 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.379255 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.398652 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.419400 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.439498 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.457856 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.458138 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.459502 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.479155 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.498455 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.518943 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.538912 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: W0123 08:15:29.554090 4860 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2b9037e_4bea_4f39_8822_8c4d9f9c3b08.slice/crio-720131cd99dfce5da256fbeadf8dc4c1c6bf922cc29ac509b94bc9fa273799a0 WatchSource:0}: Error finding container 720131cd99dfce5da256fbeadf8dc4c1c6bf922cc29ac509b94bc9fa273799a0: Status 404 returned error can't find the container with id 720131cd99dfce5da256fbeadf8dc4c1c6bf922cc29ac509b94bc9fa273799a0 Jan 23 08:15:29 crc kubenswrapper[4860]: W0123 08:15:29.555993 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa868339_9c74_409d_abba_2950409b3918.slice/crio-f14201b10de611e73cce99e10be86f96a72ff708f853fa597775fe9e698f27a1 WatchSource:0}: Error finding container f14201b10de611e73cce99e10be86f96a72ff708f853fa597775fe9e698f27a1: Status 404 returned error can't find the container with id f14201b10de611e73cce99e10be86f96a72ff708f853fa597775fe9e698f27a1 Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.556621 4860 request.go:700] Waited for 1.934697436s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&limit=500&resourceVersion=0 Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.558655 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.559090 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nltcq" event={"ID":"966345d1-b077-4fab-89b6-ca2830cbe04a","Type":"ContainerStarted","Data":"f32143a3f85a6368767ac6e69d9b6dd764e2b8732d5ffa4a6736be1ea3922783"} Jan 23 08:15:29 crc kubenswrapper[4860]: W0123 08:15:29.562270 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12330295_b7da_48bc_ad66_f382f3eedace.slice/crio-118eaed401739a0c5370fd69afb900733753fe2d3495bcaf238c73c9de9a383d WatchSource:0}: Error finding container 118eaed401739a0c5370fd69afb900733753fe2d3495bcaf238c73c9de9a383d: Status 404 returned error can't find the container with id 118eaed401739a0c5370fd69afb900733753fe2d3495bcaf238c73c9de9a383d Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.564645 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" event={"ID":"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2","Type":"ContainerStarted","Data":"75974b4bea4ac0ad23c0087839eb8bd5637ab6c44e79e7c080f03562f6cab356"} Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.566553 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" event={"ID":"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9","Type":"ContainerStarted","Data":"102e603bb201426dc6ed0ebc4b96868284d3240a9520e2bbcba4f813a9f7b62c"} Jan 23 08:15:29 crc kubenswrapper[4860]: W0123 08:15:29.566704 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72589342_3cb8_4e43_bad0_f1726f70d77a.slice/crio-2646b5875cfcf42fb60a8f535ca252e2b87d91bd93dde960899ca030b5cec2df WatchSource:0}: Error finding container 2646b5875cfcf42fb60a8f535ca252e2b87d91bd93dde960899ca030b5cec2df: Status 404 returned error can't find the container with id 
2646b5875cfcf42fb60a8f535ca252e2b87d91bd93dde960899ca030b5cec2df Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.570157 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" event={"ID":"25973908-6ad1-4e3b-a493-de9f5baef4e8","Type":"ContainerStarted","Data":"3ea91b91efad4280320ea3d9003023836857a69208405818431b803a4b0dc28e"} Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.578578 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 08:15:29 crc kubenswrapper[4860]: W0123 08:15:29.581828 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3615b8d9_7529_46e3_9320_1e8a70ced9a5.slice/crio-9f1c59412cead3b4eeb567fb6a698ed6ff197572bafb3290693a7a2cf6eb333b WatchSource:0}: Error finding container 9f1c59412cead3b4eeb567fb6a698ed6ff197572bafb3290693a7a2cf6eb333b: Status 404 returned error can't find the container with id 9f1c59412cead3b4eeb567fb6a698ed6ff197572bafb3290693a7a2cf6eb333b Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.599615 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.654813 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bcs7\" (UniqueName: \"kubernetes.io/projected/a10289e9-f5eb-44a9-a95a-6c79fd3e4461-kube-api-access-4bcs7\") pod \"service-ca-9c57cc56f-xwk76\" (UID: \"a10289e9-f5eb-44a9-a95a-6c79fd3e4461\") " pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.673166 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjmg\" (UniqueName: \"kubernetes.io/projected/1e9f9de0-0f93-4b24-8808-f322cea95b38-kube-api-access-jbjmg\") pod \"kube-storage-version-migrator-operator-b67b599dd-kdjbl\" (UID: \"1e9f9de0-0f93-4b24-8808-f322cea95b38\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.691302 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjtl\" (UniqueName: \"kubernetes.io/projected/094b44a1-e9bb-4b69-94ad-afa0f9f8179e-kube-api-access-xqjtl\") pod \"machine-config-controller-84d6567774-v9sjk\" (UID: \"094b44a1-e9bb-4b69-94ad-afa0f9f8179e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.713355 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.715462 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzz6\" (UniqueName: \"kubernetes.io/projected/7f3fedc2-c90b-4d6d-a570-30a2662feaf1-kube-api-access-5wzz6\") pod \"etcd-operator-b45778765-hzk6q\" (UID: \"7f3fedc2-c90b-4d6d-a570-30a2662feaf1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.720893 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.732081 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" Jan 23 08:15:29 crc kubenswrapper[4860]: I0123 08:15:29.998671 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.357965 4860 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.357990 4860 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.358158 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:46.358125118 +0000 UTC m=+52.986175333 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.358196 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:46.358173349 +0000 UTC m=+52.986223564 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.458225 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.458357 4860 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.577225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" event={"ID":"aa868339-9c74-409d-abba-2950409b3918","Type":"ContainerStarted","Data":"f14201b10de611e73cce99e10be86f96a72ff708f853fa597775fe9e698f27a1"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.578983 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vtj8m" event={"ID":"72589342-3cb8-4e43-bad0-f1726f70d77a","Type":"ContainerStarted","Data":"2646b5875cfcf42fb60a8f535ca252e2b87d91bd93dde960899ca030b5cec2df"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.580946 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fb64k" event={"ID":"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14","Type":"ContainerStarted","Data":"be73ff5c18c63e2c28b478326b8ee6d1c99b409dd5a36e6584265f3568d669cf"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.582715 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" event={"ID":"686eac92-c672-4c4d-bf80-8e47a557a52c","Type":"ContainerStarted","Data":"c4acb6afc748f81f2514acfad525d3a4b4a2659f17c3375be01ff57d72da301a"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.584496 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" event={"ID":"1842f8a2-d295-4d71-be98-450d5abc2c08","Type":"ContainerStarted","Data":"fab66154cc91d0bd40970e18227e3e1cdac999a6ca4effdaabb803f8d6ea7325"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.586185 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" event={"ID":"12330295-b7da-48bc-ad66-f382f3eedace","Type":"ContainerStarted","Data":"118eaed401739a0c5370fd69afb900733753fe2d3495bcaf238c73c9de9a383d"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.588342 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" event={"ID":"4caa2617-25ed-41f1-ab8a-4b8672a6cc54","Type":"ContainerStarted","Data":"aa5cb2e9ca846dad04b7b2aeba0c0dca0eb08a4086aed3c9c74319218a68007c"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.589908 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" event={"ID":"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08","Type":"ContainerStarted","Data":"720131cd99dfce5da256fbeadf8dc4c1c6bf922cc29ac509b94bc9fa273799a0"} Jan 23 08:15:30 crc 
kubenswrapper[4860]: I0123 08:15:30.591362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" event={"ID":"3ad4c125-6200-4d28-aeb0-8a0390508c91","Type":"ContainerStarted","Data":"796811bedb240be59f084cbdc538c465c6caa4ca2445432d39769631db7349fa"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.592812 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" event={"ID":"5d492e4a-90c9-448c-b02c-e7b286055cd4","Type":"ContainerStarted","Data":"b4606f4a349116d6067a74cfcfaf022b0b9d5d073152350e00ed02b6a98441f9"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.594505 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" event={"ID":"3615b8d9-7529-46e3-9320-1e8a70ced9a5","Type":"ContainerStarted","Data":"9f1c59412cead3b4eeb567fb6a698ed6ff197572bafb3290693a7a2cf6eb333b"} Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.702102 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.702302 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.702867 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.703051 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.703158 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.703320 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.705264 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.705671 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.705750 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.706958 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.708544 4860 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.708595 4860 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.708662 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:46.708635787 +0000 UTC m=+53.336685972 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.708706 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:46.708694648 +0000 UTC m=+53.336744833 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.710030 4860 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.750661 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswzs\" (UniqueName: \"kubernetes.io/projected/515bd73b-7a1b-414c-b189-25042a6049a3-kube-api-access-cswzs\") pod \"packageserver-d55dfcdfc-ld8cj\" (UID: \"515bd73b-7a1b-414c-b189-25042a6049a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779441 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hdt\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-kube-api-access-c8hdt\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779522 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-trusted-ca\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779570 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-bound-sa-token\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779602 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779629 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779674 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-tls\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779698 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-certificates\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.779718 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.779926 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.2799114 +0000 UTC m=+37.907961585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.875334 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.880785 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.880911 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.380894041 +0000 UTC m=+38.008944226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881005 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038fa3f0-7095-4011-a6f7-88f53721bdaa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881072 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881102 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9115e656-a381-4ecd-b0e7-95927c882f1b-config\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881144 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa04c3ab-1fb1-45e2-b236-64364a2112ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-tls\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 
08:15:30.881225 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0718633a-1124-4481-8f1a-002784b41ea8-node-bootstrap-token\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881247 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93b8881a-ddc3-4801-a206-27116eba7c50-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881273 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881325 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzkdc\" (UniqueName: \"kubernetes.io/projected/d8bc0ea0-46f4-4c4b-b008-394713ae0863-kube-api-access-kzkdc\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7sm9\" (UID: \"d8bc0ea0-46f4-4c4b-b008-394713ae0863\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881352 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s58g7\" (UniqueName: \"kubernetes.io/projected/0817083f-3358-4e5a-ab54-be291b5d20d9-kube-api-access-s58g7\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881377 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1aed041-7391-4d7e-9ed3-b54294438e6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lv7rq\" (UID: \"a1aed041-7391-4d7e-9ed3-b54294438e6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881428 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtpht\" (UniqueName: \"kubernetes.io/projected/fef7b17c-47aa-41ef-83b0-753aba1cac55-kube-api-access-wtpht\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881452 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twf2z\" (UniqueName: \"kubernetes.io/projected/b880fa46-0915-4446-b429-94148624a92d-kube-api-access-twf2z\") pod \"ingress-canary-dr4bl\" (UID: \"b880fa46-0915-4446-b429-94148624a92d\") " pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:30 crc 
kubenswrapper[4860]: I0123 08:15:30.881509 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hdt\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-kube-api-access-c8hdt\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881537 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5vb\" (UniqueName: \"kubernetes.io/projected/2661d5fe-77bd-44ac-a136-5362fae787f8-kube-api-access-5r5vb\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881563 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rg7p\" (UniqueName: \"kubernetes.io/projected/0718633a-1124-4481-8f1a-002784b41ea8-kube-api-access-2rg7p\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881589 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zqg9\" (UniqueName: \"kubernetes.io/projected/aa04c3ab-1fb1-45e2-b236-64364a2112ef-kube-api-access-4zqg9\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881602 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881616 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43a2987-8474-4e12-bf30-0540d082c40a-secret-volume\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881686 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881713 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-registration-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881741 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f39d400-c199-46b0-a15c-76038a8c5331-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881781 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dfda28-7a8d-45a0-81e0-60fe859754bd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.881866 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5dfda28-7a8d-45a0-81e0-60fe859754bd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882161 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-trusted-ca\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882197 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd2d4f0-43da-4722-b619-ba5b470d3d16-serving-cert\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/93b8881a-ddc3-4801-a206-27116eba7c50-images\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882395 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhqr\" (UniqueName: \"kubernetes.io/projected/c51ed869-7936-40f3-963f-5744bbf20a71-kube-api-access-lvhqr\") pod \"dns-operator-744455d44c-rgzmr\" (UID: \"c51ed869-7936-40f3-963f-5744bbf20a71\") " pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882460 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zffq\" (UniqueName: \"kubernetes.io/projected/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-kube-api-access-9zffq\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882487 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-socket-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f39d400-c199-46b0-a15c-76038a8c5331-srv-cert\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882939 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-stats-auth\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.882980 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fef7b17c-47aa-41ef-83b0-753aba1cac55-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883005 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0718633a-1124-4481-8f1a-002784b41ea8-certs\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883071 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0817083f-3358-4e5a-ab54-be291b5d20d9-config-volume\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883120 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0817083f-3358-4e5a-ab54-be291b5d20d9-metrics-tls\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883138 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqht\" (UniqueName: \"kubernetes.io/projected/a1aed041-7391-4d7e-9ed3-b54294438e6f-kube-api-access-lmqht\") pod \"package-server-manager-789f6589d5-lv7rq\" (UID: \"a1aed041-7391-4d7e-9ed3-b54294438e6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883187 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43a2987-8474-4e12-bf30-0540d082c40a-config-volume\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883218 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fef7b17c-47aa-41ef-83b0-753aba1cac55-ready\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883250 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-metrics-certs\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883270 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa04c3ab-1fb1-45e2-b236-64364a2112ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883292 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xv6c\" (UniqueName: \"kubernetes.io/projected/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-kube-api-access-6xv6c\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883311 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-service-ca-bundle\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883337 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-certificates\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883358 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-mountpoint-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883397 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj8fr\" (UniqueName: \"kubernetes.io/projected/5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c-kube-api-access-pj8fr\") pod \"multus-admission-controller-857f4d67dd-8jdp4\" (UID: \"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.883418 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9115e656-a381-4ecd-b0e7-95927c882f1b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884540 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxfh\" (UniqueName: \"kubernetes.io/projected/8f39d400-c199-46b0-a15c-76038a8c5331-kube-api-access-5zxfh\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884562 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fef7b17c-47aa-41ef-83b0-753aba1cac55-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884579 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qz4h\" (UniqueName: \"kubernetes.io/projected/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-kube-api-access-7qz4h\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884595 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038fa3f0-7095-4011-a6f7-88f53721bdaa-config\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884637 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smj5\" (UniqueName: \"kubernetes.io/projected/cbfc04c3-e456-45d8-9787-3eccb3a8c786-kube-api-access-6smj5\") pod \"migrator-59844c95c7-lfmk9\" (UID: \"cbfc04c3-e456-45d8-9787-3eccb3a8c786\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884655 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8bc0ea0-46f4-4c4b-b008-394713ae0863-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7sm9\" (UID: \"d8bc0ea0-46f4-4c4b-b008-394713ae0863\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.884673 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9115e656-a381-4ecd-b0e7-95927c882f1b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.885168 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.384758088 +0000 UTC m=+38.012808273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885210 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd2d4f0-43da-4722-b619-ba5b470d3d16-config\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885257 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-profile-collector-cert\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885284 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5d4\" (UniqueName: \"kubernetes.io/projected/93b8881a-ddc3-4801-a206-27116eba7c50-kube-api-access-hr5d4\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885309 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-srv-cert\") pod \"catalog-operator-68c6474976-brdj6\" (UID: 
\"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885326 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5dfda28-7a8d-45a0-81e0-60fe859754bd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885466 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-csi-data-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885513 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-default-certificate\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-certificates\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885543 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-plugins-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885592 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jf6\" (UniqueName: \"kubernetes.io/projected/d43a2987-8474-4e12-bf30-0540d082c40a-kube-api-access-m4jf6\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885638 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b880fa46-0915-4446-b429-94148624a92d-cert\") pod \"ingress-canary-dr4bl\" (UID: \"b880fa46-0915-4446-b429-94148624a92d\") " pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885659 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/038fa3f0-7095-4011-a6f7-88f53721bdaa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:30 crc 
kubenswrapper[4860]: I0123 08:15:30.885675 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c51ed869-7936-40f3-963f-5744bbf20a71-metrics-tls\") pod \"dns-operator-744455d44c-rgzmr\" (UID: \"c51ed869-7936-40f3-963f-5744bbf20a71\") " pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885695 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93b8881a-ddc3-4801-a206-27116eba7c50-proxy-tls\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp88w\" (UniqueName: \"kubernetes.io/projected/1cd2d4f0-43da-4722-b619-ba5b470d3d16-kube-api-access-cp88w\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885729 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8jdp4\" (UID: \"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885755 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-bound-sa-token\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.885771 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa04c3ab-1fb1-45e2-b236-64364a2112ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.886610 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-tls\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.886799 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-trusted-ca\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.887448 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.897985 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hdt\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-kube-api-access-c8hdt\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.907086 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-bound-sa-token\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.986325 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.986567 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.486521729 +0000 UTC m=+38.114571924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.986878 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43a2987-8474-4e12-bf30-0540d082c40a-secret-volume\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.986932 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.986954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-registration-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.986980 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f39d400-c199-46b0-a15c-76038a8c5331-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987031 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dfda28-7a8d-45a0-81e0-60fe859754bd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987049 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5dfda28-7a8d-45a0-81e0-60fe859754bd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987066 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd2d4f0-43da-4722-b619-ba5b470d3d16-serving-cert\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987122 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/93b8881a-ddc3-4801-a206-27116eba7c50-images\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987148 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhqr\" (UniqueName: \"kubernetes.io/projected/c51ed869-7936-40f3-963f-5744bbf20a71-kube-api-access-lvhqr\") pod \"dns-operator-744455d44c-rgzmr\" (UID: \"c51ed869-7936-40f3-963f-5744bbf20a71\") " pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zffq\" (UniqueName: \"kubernetes.io/projected/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-kube-api-access-9zffq\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987211 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-socket-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987226 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f39d400-c199-46b0-a15c-76038a8c5331-srv-cert\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987278 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-stats-auth\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987302 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fef7b17c-47aa-41ef-83b0-753aba1cac55-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987319 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0718633a-1124-4481-8f1a-002784b41ea8-certs\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987384 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987450 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0817083f-3358-4e5a-ab54-be291b5d20d9-config-volume\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987490 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987521 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-registration-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0817083f-3358-4e5a-ab54-be291b5d20d9-metrics-tls\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.987794 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-socket-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.988135 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmqht\" (UniqueName: \"kubernetes.io/projected/a1aed041-7391-4d7e-9ed3-b54294438e6f-kube-api-access-lmqht\") pod \"package-server-manager-789f6589d5-lv7rq\" (UID: \"a1aed041-7391-4d7e-9ed3-b54294438e6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:30 crc kubenswrapper[4860]: E0123 08:15:30.988292 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.488231921 +0000 UTC m=+38.116282186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.988320 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43a2987-8474-4e12-bf30-0540d082c40a-config-volume\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.988371 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fef7b17c-47aa-41ef-83b0-753aba1cac55-ready\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.988392 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa04c3ab-1fb1-45e2-b236-64364a2112ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.988408 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xv6c\" (UniqueName: \"kubernetes.io/projected/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-kube-api-access-6xv6c\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.988614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-metrics-certs\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990117 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-service-ca-bundle\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990148 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-mountpoint-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990195 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj8fr\" (UniqueName: \"kubernetes.io/projected/5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c-kube-api-access-pj8fr\") pod 
\"multus-admission-controller-857f4d67dd-8jdp4\" (UID: \"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990213 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9115e656-a381-4ecd-b0e7-95927c882f1b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990262 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fef7b17c-47aa-41ef-83b0-753aba1cac55-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990280 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qz4h\" (UniqueName: \"kubernetes.io/projected/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-kube-api-access-7qz4h\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038fa3f0-7095-4011-a6f7-88f53721bdaa-config\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990337 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zxfh\" (UniqueName: \"kubernetes.io/projected/8f39d400-c199-46b0-a15c-76038a8c5331-kube-api-access-5zxfh\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990358 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6smj5\" (UniqueName: \"kubernetes.io/projected/cbfc04c3-e456-45d8-9787-3eccb3a8c786-kube-api-access-6smj5\") pod \"migrator-59844c95c7-lfmk9\" (UID: \"cbfc04c3-e456-45d8-9787-3eccb3a8c786\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990376 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8bc0ea0-46f4-4c4b-b008-394713ae0863-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7sm9\" (UID: \"d8bc0ea0-46f4-4c4b-b008-394713ae0863\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990413 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9115e656-a381-4ecd-b0e7-95927c882f1b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: 
\"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990435 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd2d4f0-43da-4722-b619-ba5b470d3d16-config\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990458 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-profile-collector-cert\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990500 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5d4\" (UniqueName: \"kubernetes.io/projected/93b8881a-ddc3-4801-a206-27116eba7c50-kube-api-access-hr5d4\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990531 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-srv-cert\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.990590 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa04c3ab-1fb1-45e2-b236-64364a2112ef-trusted-ca\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.989795 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0817083f-3358-4e5a-ab54-be291b5d20d9-config-volume\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.989099 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/93b8881a-ddc3-4801-a206-27116eba7c50-images\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.992125 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fef7b17c-47aa-41ef-83b0-753aba1cac55-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.993460 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f5dfda28-7a8d-45a0-81e0-60fe859754bd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.992475 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0817083f-3358-4e5a-ab54-be291b5d20d9-metrics-tls\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.994249 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-mountpoint-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.995732 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43a2987-8474-4e12-bf30-0540d082c40a-config-volume\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:30 crc kubenswrapper[4860]: I0123 08:15:30.996358 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cd2d4f0-43da-4722-b619-ba5b470d3d16-serving-cert\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.000522 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fef7b17c-47aa-41ef-83b0-753aba1cac55-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.000815 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5dfda28-7a8d-45a0-81e0-60fe859754bd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.000914 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-csi-data-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.000961 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-default-certificate\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 
08:15:31.001128 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-csi-data-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.002838 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-service-ca-bundle\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003181 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-plugins-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003261 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b880fa46-0915-4446-b429-94148624a92d-cert\") pod \"ingress-canary-dr4bl\" (UID: \"b880fa46-0915-4446-b429-94148624a92d\") " pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003301 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jf6\" (UniqueName: \"kubernetes.io/projected/d43a2987-8474-4e12-bf30-0540d082c40a-kube-api-access-m4jf6\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/038fa3f0-7095-4011-a6f7-88f53721bdaa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003388 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c51ed869-7936-40f3-963f-5744bbf20a71-metrics-tls\") pod \"dns-operator-744455d44c-rgzmr\" (UID: \"c51ed869-7936-40f3-963f-5744bbf20a71\") " pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003429 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93b8881a-ddc3-4801-a206-27116eba7c50-proxy-tls\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003470 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp88w\" (UniqueName: \"kubernetes.io/projected/1cd2d4f0-43da-4722-b619-ba5b470d3d16-kube-api-access-cp88w\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8jdp4\" (UID: \"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa04c3ab-1fb1-45e2-b236-64364a2112ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003588 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038fa3f0-7095-4011-a6f7-88f53721bdaa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003645 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43a2987-8474-4e12-bf30-0540d082c40a-secret-volume\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003644 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9115e656-a381-4ecd-b0e7-95927c882f1b-config\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa04c3ab-1fb1-45e2-b236-64364a2112ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003770 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0718633a-1124-4481-8f1a-002784b41ea8-node-bootstrap-token\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93b8881a-ddc3-4801-a206-27116eba7c50-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003837 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzkdc\" (UniqueName: \"kubernetes.io/projected/d8bc0ea0-46f4-4c4b-b008-394713ae0863-kube-api-access-kzkdc\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7sm9\" (UID: \"d8bc0ea0-46f4-4c4b-b008-394713ae0863\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003860 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s58g7\" (UniqueName: \"kubernetes.io/projected/0817083f-3358-4e5a-ab54-be291b5d20d9-kube-api-access-s58g7\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003882 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1aed041-7391-4d7e-9ed3-b54294438e6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lv7rq\" (UID: \"a1aed041-7391-4d7e-9ed3-b54294438e6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003912 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtpht\" (UniqueName: \"kubernetes.io/projected/fef7b17c-47aa-41ef-83b0-753aba1cac55-kube-api-access-wtpht\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003937 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twf2z\" (UniqueName: \"kubernetes.io/projected/b880fa46-0915-4446-b429-94148624a92d-kube-api-access-twf2z\") pod \"ingress-canary-dr4bl\" (UID: \"b880fa46-0915-4446-b429-94148624a92d\") " pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.003981 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r5vb\" (UniqueName: \"kubernetes.io/projected/2661d5fe-77bd-44ac-a136-5362fae787f8-kube-api-access-5r5vb\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.004006 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rg7p\" (UniqueName: \"kubernetes.io/projected/0718633a-1124-4481-8f1a-002784b41ea8-kube-api-access-2rg7p\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.004052 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zqg9\" (UniqueName: \"kubernetes.io/projected/aa04c3ab-1fb1-45e2-b236-64364a2112ef-kube-api-access-4zqg9\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.004789 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9115e656-a381-4ecd-b0e7-95927c882f1b-config\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.004896 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-plugins-dir\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.009685 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/93b8881a-ddc3-4801-a206-27116eba7c50-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.010237 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-metrics-certs\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.010341 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-srv-cert\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.011560 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa04c3ab-1fb1-45e2-b236-64364a2112ef-metrics-tls\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.012458 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6smj5\" (UniqueName: \"kubernetes.io/projected/cbfc04c3-e456-45d8-9787-3eccb3a8c786-kube-api-access-6smj5\") pod \"migrator-59844c95c7-lfmk9\" (UID: \"cbfc04c3-e456-45d8-9787-3eccb3a8c786\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.012672 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-default-certificate\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.012802 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-stats-auth\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.012908 
4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9115e656-a381-4ecd-b0e7-95927c882f1b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.012915 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5dfda28-7a8d-45a0-81e0-60fe859754bd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.013406 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/93b8881a-ddc3-4801-a206-27116eba7c50-proxy-tls\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.013653 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-profile-collector-cert\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.013684 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1aed041-7391-4d7e-9ed3-b54294438e6f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lv7rq\" (UID: \"a1aed041-7391-4d7e-9ed3-b54294438e6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.014856 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f39d400-c199-46b0-a15c-76038a8c5331-srv-cert\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.015557 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0718633a-1124-4481-8f1a-002784b41ea8-node-bootstrap-token\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.015690 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhqr\" (UniqueName: \"kubernetes.io/projected/c51ed869-7936-40f3-963f-5744bbf20a71-kube-api-access-lvhqr\") pod \"dns-operator-744455d44c-rgzmr\" (UID: \"c51ed869-7936-40f3-963f-5744bbf20a71\") " pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.015948 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.018379 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8jdp4\" (UID: \"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.018972 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5dfda28-7a8d-45a0-81e0-60fe859754bd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9r4z6\" (UID: \"f5dfda28-7a8d-45a0-81e0-60fe859754bd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.019205 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b880fa46-0915-4446-b429-94148624a92d-cert\") pod \"ingress-canary-dr4bl\" (UID: \"b880fa46-0915-4446-b429-94148624a92d\") " pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.019656 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9115e656-a381-4ecd-b0e7-95927c882f1b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5k5dm\" (UID: \"9115e656-a381-4ecd-b0e7-95927c882f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.021418 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fef7b17c-47aa-41ef-83b0-753aba1cac55-ready\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.021566 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038fa3f0-7095-4011-a6f7-88f53721bdaa-config\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.021733 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmqht\" (UniqueName: \"kubernetes.io/projected/a1aed041-7391-4d7e-9ed3-b54294438e6f-kube-api-access-lmqht\") pod \"package-server-manager-789f6589d5-lv7rq\" (UID: \"a1aed041-7391-4d7e-9ed3-b54294438e6f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.022550 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0718633a-1124-4481-8f1a-002784b41ea8-certs\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:31 
crc kubenswrapper[4860]: I0123 08:15:31.022663 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd2d4f0-43da-4722-b619-ba5b470d3d16-config\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.022750 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zffq\" (UniqueName: \"kubernetes.io/projected/0aacdf8d-27bb-447d-97f4-03e459b5e8e2-kube-api-access-9zffq\") pod \"catalog-operator-68c6474976-brdj6\" (UID: \"0aacdf8d-27bb-447d-97f4-03e459b5e8e2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.022568 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d8bc0ea0-46f4-4c4b-b008-394713ae0863-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7sm9\" (UID: \"d8bc0ea0-46f4-4c4b-b008-394713ae0863\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.023162 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f39d400-c199-46b0-a15c-76038a8c5331-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.023621 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.024184 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5d4\" (UniqueName: \"kubernetes.io/projected/93b8881a-ddc3-4801-a206-27116eba7c50-kube-api-access-hr5d4\") pod \"machine-config-operator-74547568cd-9kmm4\" (UID: \"93b8881a-ddc3-4801-a206-27116eba7c50\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.024385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj8fr\" (UniqueName: \"kubernetes.io/projected/5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c-kube-api-access-pj8fr\") pod \"multus-admission-controller-857f4d67dd-8jdp4\" (UID: \"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.024487 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c51ed869-7936-40f3-963f-5744bbf20a71-metrics-tls\") pod \"dns-operator-744455d44c-rgzmr\" (UID: \"c51ed869-7936-40f3-963f-5744bbf20a71\") " pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.024665 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-6xv6c\" (UniqueName: \"kubernetes.io/projected/89b489d0-ffe2-418b-a4f8-4e693bb9cf63-kube-api-access-6xv6c\") pod \"csi-hostpathplugin-kc8q5\" (UID: \"89b489d0-ffe2-418b-a4f8-4e693bb9cf63\") " pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.025585 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038fa3f0-7095-4011-a6f7-88f53721bdaa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.026467 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zxfh\" (UniqueName: \"kubernetes.io/projected/8f39d400-c199-46b0-a15c-76038a8c5331-kube-api-access-5zxfh\") pod \"olm-operator-6b444d44fb-c6h5g\" (UID: \"8f39d400-c199-46b0-a15c-76038a8c5331\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.029321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s58g7\" (UniqueName: \"kubernetes.io/projected/0817083f-3358-4e5a-ab54-be291b5d20d9-kube-api-access-s58g7\") pod \"dns-default-w9rkq\" (UID: \"0817083f-3358-4e5a-ab54-be291b5d20d9\") " pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.030781 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.034522 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zqg9\" (UniqueName: \"kubernetes.io/projected/aa04c3ab-1fb1-45e2-b236-64364a2112ef-kube-api-access-4zqg9\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.035706 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa04c3ab-1fb1-45e2-b236-64364a2112ef-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8ctp5\" (UID: \"aa04c3ab-1fb1-45e2-b236-64364a2112ef\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.039583 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.064863 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.075980 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp88w\" (UniqueName: \"kubernetes.io/projected/1cd2d4f0-43da-4722-b619-ba5b470d3d16-kube-api-access-cp88w\") pod \"service-ca-operator-777779d784-ptbvt\" (UID: \"1cd2d4f0-43da-4722-b619-ba5b470d3d16\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.090202 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.095057 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jf6\" (UniqueName: \"kubernetes.io/projected/d43a2987-8474-4e12-bf30-0540d082c40a-kube-api-access-m4jf6\") pod \"collect-profiles-29485935-6l6fj\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.100040 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.105332 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.105477 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.605454771 +0000 UTC m=+38.233504956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.105711 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.105990 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.605979264 +0000 UTC m=+38.234029449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.111533 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.117660 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzkdc\" (UniqueName: \"kubernetes.io/projected/d8bc0ea0-46f4-4c4b-b008-394713ae0863-kube-api-access-kzkdc\") pod \"control-plane-machine-set-operator-78cbb6b69f-q7sm9\" (UID: \"d8bc0ea0-46f4-4c4b-b008-394713ae0863\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.118963 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.153879 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twf2z\" (UniqueName: \"kubernetes.io/projected/b880fa46-0915-4446-b429-94148624a92d-kube-api-access-twf2z\") pod \"ingress-canary-dr4bl\" (UID: \"b880fa46-0915-4446-b429-94148624a92d\") " pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.160762 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.169719 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.175426 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.181573 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.182792 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtpht\" (UniqueName: \"kubernetes.io/projected/fef7b17c-47aa-41ef-83b0-753aba1cac55-kube-api-access-wtpht\") pod \"cni-sysctl-allowlist-ds-fzmlp\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.206639 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.206770 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.706752209 +0000 UTC m=+38.334802394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.206965 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r5vb\" (UniqueName: \"kubernetes.io/projected/2661d5fe-77bd-44ac-a136-5362fae787f8-kube-api-access-5r5vb\") pod \"marketplace-operator-79b997595-k9j8h\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.207110 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.207469 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.707453136 +0000 UTC m=+38.335503321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.229596 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rg7p\" (UniqueName: \"kubernetes.io/projected/0718633a-1124-4481-8f1a-002784b41ea8-kube-api-access-2rg7p\") pod \"machine-config-server-rlmgs\" (UID: \"0718633a-1124-4481-8f1a-002784b41ea8\") " pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.255674 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.274166 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.289343 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.308191 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.308532 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.808495459 +0000 UTC m=+38.436545674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.308594 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.308999 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.808983271 +0000 UTC m=+38.437033466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.311127 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.325414 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qz4h\" (UniqueName: \"kubernetes.io/projected/9d48bd31-b86d-4b22-aa0c-278425b2dbb6-kube-api-access-7qz4h\") pod \"router-default-5444994796-dkzf5\" (UID: \"9d48bd31-b86d-4b22-aa0c-278425b2dbb6\") " pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.325892 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/038fa3f0-7095-4011-a6f7-88f53721bdaa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hb9c8\" (UID: \"038fa3f0-7095-4011-a6f7-88f53721bdaa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.365715 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dr4bl" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.379749 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.409634 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.410234 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:31.910212898 +0000 UTC m=+38.538263093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.429228 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.437947 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.445369 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rlmgs" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.453670 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.512284 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.512660 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.012642625 +0000 UTC m=+38.640692900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.598694 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" event={"ID":"602a7467-b6cc-41e4-acd2-22f69df313e9","Type":"ContainerStarted","Data":"c6ae4855160ff18506bb7c5ab9c398782d09327546066df6d428948b2436f0a7"} Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.614793 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.614961 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.114941469 +0000 UTC m=+38.742991654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.615100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.615382 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.1153747 +0000 UTC m=+38.743424885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.627680 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hzk6q"] Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.649629 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl"] Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.669695 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk"] Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.703311 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-xwk76"] Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.715975 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.716176 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.216149826 +0000 UTC m=+38.844200011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.716322 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.716627 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.216619667 +0000 UTC m=+38.844669852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.744711 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.817661 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.817807 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.317782322 +0000 UTC m=+38.945832507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.817998 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.818288 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.318280965 +0000 UTC m=+38.946331150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:31 crc kubenswrapper[4860]: W0123 08:15:31.888407 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094b44a1_e9bb_4b69_94ad_afa0f9f8179e.slice/crio-e1dac858189808bc8fd289c9d6b5cb5d29d2e49d58886ebba82c12fbeb48efa1 WatchSource:0}: Error finding container e1dac858189808bc8fd289c9d6b5cb5d29d2e49d58886ebba82c12fbeb48efa1: Status 404 returned error can't find the container with id e1dac858189808bc8fd289c9d6b5cb5d29d2e49d58886ebba82c12fbeb48efa1 Jan 23 08:15:31 crc kubenswrapper[4860]: I0123 08:15:31.918861 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:31 crc kubenswrapper[4860]: E0123 08:15:31.919744 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.419720427 +0000 UTC m=+39.047770602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.023532 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.024087 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.524045152 +0000 UTC m=+39.152095337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.065060 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.114127 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.124737 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.124924 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.62490006 +0000 UTC m=+39.252950245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.124962 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.126211 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.626200653 +0000 UTC m=+39.254250838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.226229 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.226578 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.726562587 +0000 UTC m=+39.354612772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.328335 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.328982 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.828969363 +0000 UTC m=+39.457019548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.408934 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.430818 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.431269 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:32.931242526 +0000 UTC m=+39.559292711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.532573 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.532914 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.032899984 +0000 UTC m=+39.660950169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.623206 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" event={"ID":"7f3fedc2-c90b-4d6d-a570-30a2662feaf1","Type":"ContainerStarted","Data":"43ffc1603df22046409c5f71ed99509ab33222c5925b34a2e60afe30dfa736f5"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.626381 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nltcq" event={"ID":"966345d1-b077-4fab-89b6-ca2830cbe04a","Type":"ContainerStarted","Data":"ef9829b2d0d279c9adbb15de10b699b5569ca1e761584235db6dfded98f34cb2"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.628773 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" event={"ID":"5d492e4a-90c9-448c-b02c-e7b286055cd4","Type":"ContainerStarted","Data":"feded2c7f5571833d60957a1b12878023e5d3baa20afa9773db817dc5fe8d814"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.630420 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" event={"ID":"8149687b-dd22-4d3d-998c-efd2692db14b","Type":"ContainerStarted","Data":"9bcff4fbf0b8581a8d788f3ab57bd4e14193b4726df8b31f68d255a867ad4872"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.633114 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.633524 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.133498186 +0000 UTC m=+39.761548371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.634737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" event={"ID":"25973908-6ad1-4e3b-a493-de9f5baef4e8","Type":"ContainerStarted","Data":"17b8d4f72a7f70df8e2791eb2a33508a4e3b93e202a72ea6aadf9619ab53e21a"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.637563 4860 generic.go:334] "Generic (PLEG): container finished" podID="aa868339-9c74-409d-abba-2950409b3918" containerID="89ac1f0b32bc134b37e6146f1c9247cba9b0a5ba57a6885574cb2976818a48f8" exitCode=0 Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.637647 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" event={"ID":"aa868339-9c74-409d-abba-2950409b3918","Type":"ContainerDied","Data":"89ac1f0b32bc134b37e6146f1c9247cba9b0a5ba57a6885574cb2976818a48f8"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.639343 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" event={"ID":"3ad4c125-6200-4d28-aeb0-8a0390508c91","Type":"ContainerStarted","Data":"7e22b71e54d38dcb1540f64b14ab920600d8cbc3143d60e39016e319066aa32b"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.640817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" event={"ID":"8f39d400-c199-46b0-a15c-76038a8c5331","Type":"ContainerStarted","Data":"ec702e7c5bada48814bb2254f5cdd1655fd8918d31c4d6c17ad6cbc3e74fe9cc"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.643736 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" event={"ID":"a1aed041-7391-4d7e-9ed3-b54294438e6f","Type":"ContainerStarted","Data":"06b5ea337b6d6b8bdef5adea103794c47890fd110fd19d395f76eb4b4852673b"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.645574 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" event={"ID":"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2","Type":"ContainerStarted","Data":"add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.645884 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.650242 4860 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hdn58 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.650554 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" podUID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.652562 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" event={"ID":"094b44a1-e9bb-4b69-94ad-afa0f9f8179e","Type":"ContainerStarted","Data":"e1dac858189808bc8fd289c9d6b5cb5d29d2e49d58886ebba82c12fbeb48efa1"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.657293 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" event={"ID":"fef7b17c-47aa-41ef-83b0-753aba1cac55","Type":"ContainerStarted","Data":"f0d054fd4a78303b4b14ae5fd859c36afe0c1d0a91228a85fdf1eb20bdf74922"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.669000 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" podStartSLOduration=18.668979818 podStartE2EDuration="18.668979818s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.667639784 +0000 UTC m=+39.295689969" watchObservedRunningTime="2026-01-23 08:15:32.668979818 +0000 UTC m=+39.297030003" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.671940 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" event={"ID":"a10289e9-f5eb-44a9-a95a-6c79fd3e4461","Type":"ContainerStarted","Data":"87e4b12d75a2a41d1ffb91fae48daafbd8457a0eca83df41da1e4369914c0011"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.705832 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rlmgs" event={"ID":"0718633a-1124-4481-8f1a-002784b41ea8","Type":"ContainerStarted","Data":"a05e357bd5205e4b89312d0fe659ecaa10748188733957e6e93acd56fb565476"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.718132 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" event={"ID":"12330295-b7da-48bc-ad66-f382f3eedace","Type":"ContainerStarted","Data":"9c9d42efe809a1fa8b09fc79ab565dd725571d947c604935bd528ca01a77dc1c"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.721942 4860 generic.go:334] "Generic (PLEG): container finished" podID="c2b9037e-4bea-4f39-8822-8c4d9f9c3b08" containerID="785287911d9c6ca6d500fc07ccc7f4cc467715a02bfe89cbee4153c7d6f9b2cb" exitCode=0 Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.722120 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" event={"ID":"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08","Type":"ContainerDied","Data":"785287911d9c6ca6d500fc07ccc7f4cc467715a02bfe89cbee4153c7d6f9b2cb"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.735768 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.736122 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.236110197 +0000 UTC m=+39.864160382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.739088 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vtj8m" event={"ID":"72589342-3cb8-4e43-bad0-f1726f70d77a","Type":"ContainerStarted","Data":"3ebd9b73091856d95b96e4e792e29d370aa9d4b4654914f9d99e532eff9c3568"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.756180 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" event={"ID":"9c1d4b4e-bb32-40cd-88ec-c884a5fa88f9","Type":"ContainerStarted","Data":"721866155aee660d1780b988c4f45aa64fd8de21d9b231776e2392489024e615"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.759862 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rpmlf" podStartSLOduration=18.759844894 podStartE2EDuration="18.759844894s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.73543071 +0000 UTC m=+39.363480895" watchObservedRunningTime="2026-01-23 08:15:32.759844894 +0000 UTC m=+39.387895079" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.765168 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dkzf5" event={"ID":"9d48bd31-b86d-4b22-aa0c-278425b2dbb6","Type":"ContainerStarted","Data":"b31df42fde7e6821cc0b3d9f0e06fc644d2f31f6451205f7f64f06682068fe3a"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.767411 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" event={"ID":"1e9f9de0-0f93-4b24-8808-f322cea95b38","Type":"ContainerStarted","Data":"304cdb34bbe30ac2e088f492454543bc2a7b83b9016ba681aa0d54e5bba07491"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.769124 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" event={"ID":"9115e656-a381-4ecd-b0e7-95927c882f1b","Type":"ContainerStarted","Data":"a630d66ec8b3e9582d0cdb439c3979658338c3084fbb774edca06ad8ecbee595"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.770434 
4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" event={"ID":"1842f8a2-d295-4d71-be98-450d5abc2c08","Type":"ContainerStarted","Data":"f4e28d80baade9652124788fd525d97aacf4f42f5c7bba009b195e7772f50786"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.779247 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fb64k" event={"ID":"d22a61a3-fe8c-43f0-a6fc-16e5da78cd14","Type":"ContainerStarted","Data":"f17d3812354d2f60d2e61eed180ad7445a4ccbbc1ba55345cf64607735c11d1f"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.779711 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.786504 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" event={"ID":"686eac92-c672-4c4d-bf80-8e47a557a52c","Type":"ContainerStarted","Data":"5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6"} Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.786960 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.789706 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vtj8m" podStartSLOduration=18.789690175 podStartE2EDuration="18.789690175s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.787847439 +0000 UTC m=+39.415897624" watchObservedRunningTime="2026-01-23 08:15:32.789690175 +0000 UTC m=+39.417740360" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.797590 4860 patch_prober.go:28] interesting pod/console-operator-58897d9998-fb64k container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.797646 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fb64k" podUID="d22a61a3-fe8c-43f0-a6fc-16e5da78cd14" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.827754 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-w2fd5" podStartSLOduration=18.827734173 podStartE2EDuration="18.827734173s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.823567858 +0000 UTC m=+39.451618043" watchObservedRunningTime="2026-01-23 08:15:32.827734173 +0000 UTC m=+39.455784358" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.842821 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.844590 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.344575316 +0000 UTC m=+39.972625501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: W0123 08:15:32.846197 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b8881a_ddc3_4801_a206_27116eba7c50.slice/crio-fa2be323ed4b8c45cd75e8b8403586adc2b6e68af910cd37723e29ee67da21a6 WatchSource:0}: Error finding container fa2be323ed4b8c45cd75e8b8403586adc2b6e68af910cd37723e29ee67da21a6: Status 404 returned error can't find the container with id fa2be323ed4b8c45cd75e8b8403586adc2b6e68af910cd37723e29ee67da21a6 Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.849945 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.860472 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.866184 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" podStartSLOduration=18.866167729 podStartE2EDuration="18.866167729s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.842573576 +0000 UTC m=+39.470623761" watchObservedRunningTime="2026-01-23 08:15:32.866167729 +0000 UTC m=+39.494217914" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.866898 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-w9rkq"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.901906 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8jdp4"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.903730 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z5dfl" podStartSLOduration=18.903711294 podStartE2EDuration="18.903711294s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.87017741 +0000 UTC m=+39.498227595" watchObservedRunningTime="2026-01-23 08:15:32.903711294 +0000 UTC m=+39.531761479" Jan 23 08:15:32 crc 
kubenswrapper[4860]: I0123 08:15:32.910231 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9"] Jan 23 08:15:32 crc kubenswrapper[4860]: W0123 08:15:32.911520 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d4dcd04_42ad_44e1_99cd_26ca78b8fa4c.slice/crio-cbfceeea1a8d461df10a5e25ad76b1aa58e38fce13723cf8b28d28fcd6e4a584 WatchSource:0}: Error finding container cbfceeea1a8d461df10a5e25ad76b1aa58e38fce13723cf8b28d28fcd6e4a584: Status 404 returned error can't find the container with id cbfceeea1a8d461df10a5e25ad76b1aa58e38fce13723cf8b28d28fcd6e4a584 Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.932725 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj"] Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.935636 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6sllw" podStartSLOduration=18.935615867 podStartE2EDuration="18.935615867s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.903993111 +0000 UTC m=+39.532043296" watchObservedRunningTime="2026-01-23 08:15:32.935615867 +0000 UTC m=+39.563666052" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.938782 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fb64k" podStartSLOduration=18.938762976 podStartE2EDuration="18.938762976s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:32.932643662 +0000 UTC m=+39.560693847" watchObservedRunningTime="2026-01-23 08:15:32.938762976 +0000 UTC m=+39.566813191" Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.945788 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:32 crc kubenswrapper[4860]: E0123 08:15:32.949390 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.449374912 +0000 UTC m=+40.077425097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:32 crc kubenswrapper[4860]: I0123 08:15:32.967406 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:32.998989 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rgzmr"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.028597 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.031691 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.042278 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dr4bl"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.047121 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.047504 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.547489482 +0000 UTC m=+40.175539667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.056008 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.080282 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k9j8h"] Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.080370 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc51ed869_7936_40f3_963f_5744bbf20a71.slice/crio-31f12014d35f6d48d2b1f718ad70b10822ff270d31c786bdfa3cd0bffef094db WatchSource:0}: Error finding container 31f12014d35f6d48d2b1f718ad70b10822ff270d31c786bdfa3cd0bffef094db: Status 404 returned error can't find the container with id 31f12014d35f6d48d2b1f718ad70b10822ff270d31c786bdfa3cd0bffef094db Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.103546 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.111425 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kc8q5"] Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.128876 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbfc04c3_e456_45d8_9787_3eccb3a8c786.slice/crio-1c146026f0e61c85c8c7ccc4479558ac8e0209b9a5dca9eb28f81b5bcdb0d6fa WatchSource:0}: Error finding container 1c146026f0e61c85c8c7ccc4479558ac8e0209b9a5dca9eb28f81b5bcdb0d6fa: Status 404 returned error can't find the container with id 1c146026f0e61c85c8c7ccc4479558ac8e0209b9a5dca9eb28f81b5bcdb0d6fa Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.133525 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.150177 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8"] Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.150351 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.150678 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.650665957 +0000 UTC m=+40.278716142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.223102 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd43a2987_8474_4e12_bf30_0540d082c40a.slice/crio-b00de6f24089ef9453393997dab4ff369ee80be5da0395915b82653fd587475b WatchSource:0}: Error finding container b00de6f24089ef9453393997dab4ff369ee80be5da0395915b82653fd587475b: Status 404 returned error can't find the container with id b00de6f24089ef9453393997dab4ff369ee80be5da0395915b82653fd587475b Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.225490 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb880fa46_0915_4446_b429_94148624a92d.slice/crio-ebbd58c15547366f82180084f516600f12be241ae1ab7449720e59fb6751a40f WatchSource:0}: Error finding container ebbd58c15547366f82180084f516600f12be241ae1ab7449720e59fb6751a40f: Status 404 returned error can't find the container with id ebbd58c15547366f82180084f516600f12be241ae1ab7449720e59fb6751a40f Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.247899 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b489d0_ffe2_418b_a4f8_4e693bb9cf63.slice/crio-75229ab191bc016bf6cadd6c52f645c73958e408703048c782a2c807f7fefe48 WatchSource:0}: Error finding container 75229ab191bc016bf6cadd6c52f645c73958e408703048c782a2c807f7fefe48: Status 404 returned error can't find the container with id 75229ab191bc016bf6cadd6c52f645c73958e408703048c782a2c807f7fefe48 Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.250416 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd2d4f0_43da_4722_b619_ba5b470d3d16.slice/crio-324d5b985bff914eb7cae008b6df4ae2ca764cfcb3efec7c38e8da7dca59b1ba WatchSource:0}: Error finding container 324d5b985bff914eb7cae008b6df4ae2ca764cfcb3efec7c38e8da7dca59b1ba: Status 404 returned error can't find the container with id 324d5b985bff914eb7cae008b6df4ae2ca764cfcb3efec7c38e8da7dca59b1ba Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.251407 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.251723 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.751707919 +0000 UTC m=+40.379758104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.252162 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa04c3ab_1fb1_45e2_b236_64364a2112ef.slice/crio-34efbc64dfd364a2001be6761f9ec9167b7bb7f30f459031bebbc8137fbdbe3a WatchSource:0}: Error finding container 34efbc64dfd364a2001be6761f9ec9167b7bb7f30f459031bebbc8137fbdbe3a: Status 404 returned error can't find the container with id 34efbc64dfd364a2001be6761f9ec9167b7bb7f30f459031bebbc8137fbdbe3a Jan 23 08:15:33 crc kubenswrapper[4860]: W0123 08:15:33.268799 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod038fa3f0_7095_4011_a6f7_88f53721bdaa.slice/crio-677d4efb83f50becb423f9506f3fce0271ab83af4a7246fc890f669662816598 WatchSource:0}: Error finding container 677d4efb83f50becb423f9506f3fce0271ab83af4a7246fc890f669662816598: Status 404 returned error can't find the container with id 677d4efb83f50becb423f9506f3fce0271ab83af4a7246fc890f669662816598 Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.356000 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.356327 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.856312661 +0000 UTC m=+40.484362836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.456465 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.456659 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 08:15:33.956611365 +0000 UTC m=+40.584661560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.456875 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.457186 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:33.957175579 +0000 UTC m=+40.585225764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.558436 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.558737 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.558792 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.058761975 +0000 UTC m=+40.686812160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.567200 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3416422c-d1ef-463b-a846-11d6ea9715c3-metrics-certs\") pod \"network-metrics-daemon-vtsqg\" (UID: \"3416422c-d1ef-463b-a846-11d6ea9715c3\") " pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.589444 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vtsqg" Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.660140 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.660521 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.160504894 +0000 UTC m=+40.788555079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.761872 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.762195 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.262177703 +0000 UTC m=+40.890227888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.843314 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" event={"ID":"d43a2987-8474-4e12-bf30-0540d082c40a","Type":"ContainerStarted","Data":"b00de6f24089ef9453393997dab4ff369ee80be5da0395915b82653fd587475b"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.847087 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" event={"ID":"8149687b-dd22-4d3d-998c-efd2692db14b","Type":"ContainerStarted","Data":"059e91555c02980c74f369395ff24560793e56feaed7928bf41818d11e2478fd"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.850193 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" event={"ID":"a10289e9-f5eb-44a9-a95a-6c79fd3e4461","Type":"ContainerStarted","Data":"6745066aacecb343f74875e6b3148d37ee1b96e262fb601d01ded1e8e43898d3"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.862351 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" event={"ID":"094b44a1-e9bb-4b69-94ad-afa0f9f8179e","Type":"ContainerStarted","Data":"0babc4c021cc07a3e89e80896765cd7d14842e0d6a22a24d5c433eeeed801bd4"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.864102 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:33 crc kubenswrapper[4860]: E0123 08:15:33.864484 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.364468716 +0000 UTC m=+40.992518901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.870134 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" event={"ID":"1cd2d4f0-43da-4722-b619-ba5b470d3d16","Type":"ContainerStarted","Data":"324d5b985bff914eb7cae008b6df4ae2ca764cfcb3efec7c38e8da7dca59b1ba"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.877925 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" event={"ID":"8f39d400-c199-46b0-a15c-76038a8c5331","Type":"ContainerStarted","Data":"1f99238e8188fee940579bbc7eabf5f05321f691ef550f4c495b26ac3cad3647"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.878875 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.884842 4860 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c6h5g container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.884885 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" podUID="8f39d400-c199-46b0-a15c-76038a8c5331" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.886467 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" event={"ID":"aa04c3ab-1fb1-45e2-b236-64364a2112ef","Type":"ContainerStarted","Data":"34efbc64dfd364a2001be6761f9ec9167b7bb7f30f459031bebbc8137fbdbe3a"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.891545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" event={"ID":"89b489d0-ffe2-418b-a4f8-4e693bb9cf63","Type":"ContainerStarted","Data":"75229ab191bc016bf6cadd6c52f645c73958e408703048c782a2c807f7fefe48"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.921585 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" event={"ID":"9115e656-a381-4ecd-b0e7-95927c882f1b","Type":"ContainerStarted","Data":"3a7f3f239cabda5749b589b8f6cd8ac93da49cde2e75416e5e4dc11d7d458edb"} Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.968704 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:33 crc 
kubenswrapper[4860]: E0123 08:15:33.969049 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.469000876 +0000 UTC m=+41.097051111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:33 crc kubenswrapper[4860]: I0123 08:15:33.971141 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" event={"ID":"c2b9037e-4bea-4f39-8822-8c4d9f9c3b08","Type":"ContainerStarted","Data":"243434478be0cfc38a93cb740505bdf99296c3ccb332799b98871772bba136e9"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.044537 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" event={"ID":"038fa3f0-7095-4011-a6f7-88f53721bdaa","Type":"ContainerStarted","Data":"677d4efb83f50becb423f9506f3fce0271ab83af4a7246fc890f669662816598"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.068198 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" event={"ID":"fef7b17c-47aa-41ef-83b0-753aba1cac55","Type":"ContainerStarted","Data":"f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.068249 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.074568 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.076652 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.576639645 +0000 UTC m=+41.204689830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.115470 4860 generic.go:334] "Generic (PLEG): container finished" podID="3615b8d9-7529-46e3-9320-1e8a70ced9a5" containerID="cdfa9892a55ef137116755c65941008a44f25a2cb00d0a8c59d76566e6c21529" exitCode=0 Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.115561 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" event={"ID":"3615b8d9-7529-46e3-9320-1e8a70ced9a5","Type":"ContainerDied","Data":"cdfa9892a55ef137116755c65941008a44f25a2cb00d0a8c59d76566e6c21529"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.186434 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" event={"ID":"1e9f9de0-0f93-4b24-8808-f322cea95b38","Type":"ContainerStarted","Data":"d74d497043538d583a80cbd9418e163eae2b542d1e9a7942127aceed001d320f"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.200809 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.223326 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.224206 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.724177026 +0000 UTC m=+41.352227211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.238441 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" event={"ID":"d8bc0ea0-46f4-4c4b-b008-394713ae0863","Type":"ContainerStarted","Data":"3f76ebbc4844207346bd11645a94fca7d0d7356cc80373f77acb011d3f0d434b"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.263345 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" event={"ID":"93b8881a-ddc3-4801-a206-27116eba7c50","Type":"ContainerStarted","Data":"234e09fceddbd5579ad88ad32f6a2634577876a5171d308696d44fd0f0ec4326"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.264099 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" event={"ID":"93b8881a-ddc3-4801-a206-27116eba7c50","Type":"ContainerStarted","Data":"fa2be323ed4b8c45cd75e8b8403586adc2b6e68af910cd37723e29ee67da21a6"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.283178 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dkzf5" event={"ID":"9d48bd31-b86d-4b22-aa0c-278425b2dbb6","Type":"ContainerStarted","Data":"186b28fbb7d223b1f5d455161ba3cbc676426318691e6eb7db3cfadc6b324146"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.313984 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" event={"ID":"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c","Type":"ContainerStarted","Data":"cbfceeea1a8d461df10a5e25ad76b1aa58e38fce13723cf8b28d28fcd6e4a584"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.326528 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.328302 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w9rkq" event={"ID":"0817083f-3358-4e5a-ab54-be291b5d20d9","Type":"ContainerStarted","Data":"e6fba3ac31f174df865670a5c7fca74954d755db352e7e0265f9f98af61f0f71"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.328340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w9rkq" event={"ID":"0817083f-3358-4e5a-ab54-be291b5d20d9","Type":"ContainerStarted","Data":"51552d19773ff7bd536e6f951ea52a7f51f0d1400e2e63e2d1c403f704d986c3"} Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.328558 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 08:15:34.828542272 +0000 UTC m=+41.456592457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.344468 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" event={"ID":"515bd73b-7a1b-414c-b189-25042a6049a3","Type":"ContainerStarted","Data":"cd0bfd0174a10145e97378db6c598598d36654ed564408eac024bc5d52207da3"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.344767 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" event={"ID":"515bd73b-7a1b-414c-b189-25042a6049a3","Type":"ContainerStarted","Data":"328266c62f2ae88b537e1068238eb28c9d3c856c3531517b3d80b80d5db96205"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.345313 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.349976 4860 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ld8cj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.352503 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" podUID="515bd73b-7a1b-414c-b189-25042a6049a3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.357649 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rlmgs" event={"ID":"0718633a-1124-4481-8f1a-002784b41ea8","Type":"ContainerStarted","Data":"b57a39027cfc40b5197d56c03ea6c3c42a6fb8ff33c3389818adb68ddba10707"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.371524 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" event={"ID":"a1aed041-7391-4d7e-9ed3-b54294438e6f","Type":"ContainerStarted","Data":"b56c8b4ac48065d93b8d192ee6f0972460f65c4cab42813bd9d08cbc14bcbdc5"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.372690 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" event={"ID":"c51ed869-7936-40f3-963f-5744bbf20a71","Type":"ContainerStarted","Data":"31f12014d35f6d48d2b1f718ad70b10822ff270d31c786bdfa3cd0bffef094db"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.373449 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" 
event={"ID":"2661d5fe-77bd-44ac-a136-5362fae787f8","Type":"ContainerStarted","Data":"662f4609e7210b4b7c9add8d9e3d540659b5e7ac06d4402007f94eb84076ad12"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.380960 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dr4bl" event={"ID":"b880fa46-0915-4446-b429-94148624a92d","Type":"ContainerStarted","Data":"ebbd58c15547366f82180084f516600f12be241ae1ab7449720e59fb6751a40f"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.392672 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" event={"ID":"0aacdf8d-27bb-447d-97f4-03e459b5e8e2","Type":"ContainerStarted","Data":"35a4cbf7ee033f1e25edd1850fd424eff9176f425c506875759600114731ce50"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.393639 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.396042 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" event={"ID":"f5dfda28-7a8d-45a0-81e0-60fe859754bd","Type":"ContainerStarted","Data":"59cbc70ded4f89739394740ebe91a7dde3cf190e689ce7bd200bb590b27f1ebc"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.396089 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vtsqg"] Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.405908 4860 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-brdj6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.405951 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" podUID="0aacdf8d-27bb-447d-97f4-03e459b5e8e2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.412812 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" event={"ID":"7f3fedc2-c90b-4d6d-a570-30a2662feaf1","Type":"ContainerStarted","Data":"ce98083c6e1a143b76b5fc7be7508434de42cc357b10355dbe602f78b409dc11"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.416673 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" podStartSLOduration=20.416659279 podStartE2EDuration="20.416659279s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.41551061 +0000 UTC m=+41.043560795" watchObservedRunningTime="2026-01-23 08:15:34.416659279 +0000 UTC m=+41.044709464" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.421549 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" 
event={"ID":"25973908-6ad1-4e3b-a493-de9f5baef4e8","Type":"ContainerStarted","Data":"5638df44de1f3a65d5a7f84af92445676e8158bd7fdf08ac0f19fc9d89a4c60f"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.425346 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" event={"ID":"602a7467-b6cc-41e4-acd2-22f69df313e9","Type":"ContainerStarted","Data":"3d13ead05ddeeb279d3996b2eb3d6a91339fa8bdd000e8ce63439ef095f6aea0"} Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.429464 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.429574 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.929557454 +0000 UTC m=+41.557607639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.430580 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-xwk76" podStartSLOduration=20.430566949 podStartE2EDuration="20.430566949s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.429309787 +0000 UTC m=+41.057359972" watchObservedRunningTime="2026-01-23 08:15:34.430566949 +0000 UTC m=+41.058617124" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.431744 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.434139 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" event={"ID":"cbfc04c3-e456-45d8-9787-3eccb3a8c786","Type":"ContainerStarted","Data":"1c146026f0e61c85c8c7ccc4479558ac8e0209b9a5dca9eb28f81b5bcdb0d6fa"} Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.435638 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:34.935625156 +0000 UTC m=+41.563675341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.439883 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.439937 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.442101 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.452257 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.453537 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kdjbl" podStartSLOduration=20.453522767 podStartE2EDuration="20.453522767s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.451357842 +0000 UTC m=+41.079408027" watchObservedRunningTime="2026-01-23 08:15:34.453522767 +0000 UTC m=+41.081572952" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.461536 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.471356 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-nltcq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.471424 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nltcq" podUID="966345d1-b077-4fab-89b6-ca2830cbe04a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.506685 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fb64k" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.523559 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:34 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:34 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:34 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.523632 
4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.541636 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.542097 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.042058234 +0000 UTC m=+41.670108419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.542285 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.543582 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.043573632 +0000 UTC m=+41.671623817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.563681 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rlmgs" podStartSLOduration=7.563666047 podStartE2EDuration="7.563666047s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.56253371 +0000 UTC m=+41.190583895" watchObservedRunningTime="2026-01-23 08:15:34.563666047 +0000 UTC m=+41.191716232" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.565503 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5k5dm" podStartSLOduration=20.565496384 podStartE2EDuration="20.565496384s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.498440217 +0000 UTC m=+41.126490402" watchObservedRunningTime="2026-01-23 08:15:34.565496384 +0000 UTC m=+41.193546579" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.643269 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.644613 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.144596074 +0000 UTC m=+41.772646249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.697656 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" podStartSLOduration=20.697635208 podStartE2EDuration="20.697635208s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.610321472 +0000 UTC m=+41.238371657" watchObservedRunningTime="2026-01-23 08:15:34.697635208 +0000 UTC m=+41.325685413" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.698476 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" podStartSLOduration=20.69846975 podStartE2EDuration="20.69846975s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.696263514 +0000 UTC m=+41.324313699" watchObservedRunningTime="2026-01-23 08:15:34.69846975 +0000 UTC m=+41.326519935" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.772777 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podStartSLOduration=7.772752158 podStartE2EDuration="7.772752158s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.739707127 +0000 UTC m=+41.367757332" watchObservedRunningTime="2026-01-23 08:15:34.772752158 +0000 UTC m=+41.400802343" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.775488 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.775742 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.275730353 +0000 UTC m=+41.903780538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.800351 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" podStartSLOduration=20.800333362 podStartE2EDuration="20.800333362s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.793164211 +0000 UTC m=+41.421214396" watchObservedRunningTime="2026-01-23 08:15:34.800333362 +0000 UTC m=+41.428383537" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.847238 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dkzf5" podStartSLOduration=20.847223212 podStartE2EDuration="20.847223212s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.847210192 +0000 UTC m=+41.475260377" watchObservedRunningTime="2026-01-23 08:15:34.847223212 +0000 UTC m=+41.475273397" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.867757 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nltcq" podStartSLOduration=20.867738268 podStartE2EDuration="20.867738268s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.86741783 +0000 UTC m=+41.495468015" watchObservedRunningTime="2026-01-23 08:15:34.867738268 +0000 UTC m=+41.495788453" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.876551 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.876939 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.376924409 +0000 UTC m=+42.004974594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.914063 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2g9xr" podStartSLOduration=20.914047143 podStartE2EDuration="20.914047143s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.913679703 +0000 UTC m=+41.541729888" watchObservedRunningTime="2026-01-23 08:15:34.914047143 +0000 UTC m=+41.542097328" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.958704 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tqw5v" podStartSLOduration=20.958681857 podStartE2EDuration="20.958681857s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.957424495 +0000 UTC m=+41.585474680" watchObservedRunningTime="2026-01-23 08:15:34.958681857 +0000 UTC m=+41.586732042" Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.978004 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:34 crc kubenswrapper[4860]: E0123 08:15:34.978359 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.478345281 +0000 UTC m=+42.106395466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:34 crc kubenswrapper[4860]: I0123 08:15:34.979694 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nsbwd" podStartSLOduration=21.979678745 podStartE2EDuration="21.979678745s" podCreationTimestamp="2026-01-23 08:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:34.97871246 +0000 UTC m=+41.606762645" watchObservedRunningTime="2026-01-23 08:15:34.979678745 +0000 UTC m=+41.607728930" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.016541 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" podStartSLOduration=21.016518411 podStartE2EDuration="21.016518411s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.012699675 +0000 UTC m=+41.640749860" watchObservedRunningTime="2026-01-23 08:15:35.016518411 +0000 UTC m=+41.644568616" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.029671 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hzk6q" podStartSLOduration=21.029655712 podStartE2EDuration="21.029655712s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.029444706 +0000 UTC m=+41.657494891" watchObservedRunningTime="2026-01-23 08:15:35.029655712 +0000 UTC m=+41.657705887" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.078909 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.079174 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.579145297 +0000 UTC m=+42.207195522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.143996 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" podStartSLOduration=20.143979578 podStartE2EDuration="20.143979578s" podCreationTimestamp="2026-01-23 08:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.141064635 +0000 UTC m=+41.769114820" watchObservedRunningTime="2026-01-23 08:15:35.143979578 +0000 UTC m=+41.772029763" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.180810 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.181242 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.681230825 +0000 UTC m=+42.309281010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.245097 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fzmlp"] Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.282291 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.282496 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.782445312 +0000 UTC m=+42.410495507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.282668 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.282923 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.782912253 +0000 UTC m=+42.410962438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.383671 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.384243 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.884220632 +0000 UTC m=+42.512270817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.384347 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.384630 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.884617653 +0000 UTC m=+42.512667838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.441766 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:35 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:35 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:35 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.441829 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.485226 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.485383 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.985342677 +0000 UTC m=+42.613392872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.486086 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.486425 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.986418143 +0000 UTC m=+42.614468328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.582171 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" event={"ID":"aa868339-9c74-409d-abba-2950409b3918","Type":"ContainerStarted","Data":"d8b7f86d9e74217bc06ebb009296b0d5aaf90124977fa2f7587963a94aa42576"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.586991 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.587129 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.087101896 +0000 UTC m=+42.715152071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.587351 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.587685 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.087673661 +0000 UTC m=+42.715723846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.598809 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" event={"ID":"cbfc04c3-e456-45d8-9787-3eccb3a8c786","Type":"ContainerStarted","Data":"968e6c603828d604c8d375d2c9706c0265f34dabcd5f222c4ff0f3845f146ecd"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.598855 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" event={"ID":"cbfc04c3-e456-45d8-9787-3eccb3a8c786","Type":"ContainerStarted","Data":"20ea91d8fbe7655df02468160157e5b1fa7b979324440b90dda888cc09c191f0"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.601720 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" event={"ID":"038fa3f0-7095-4011-a6f7-88f53721bdaa","Type":"ContainerStarted","Data":"c6e0336238e4bdf4a06c997dcd876787690a59ad9ba35f5b335f77717a3d6813"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.611545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" event={"ID":"93b8881a-ddc3-4801-a206-27116eba7c50","Type":"ContainerStarted","Data":"890370a2d196f26dce3337b0b4ff9015ec7a05ac79f705576f48d786b2c9eeae"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.614163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" event={"ID":"2661d5fe-77bd-44ac-a136-5362fae787f8","Type":"ContainerStarted","Data":"a71de5a69c1c2ca17ed6fa1685b4edb056f2402413f0e9ace1cc4e23df54bb08"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.614953 4860 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.615872 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k9j8h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.615928 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.626586 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" event={"ID":"094b44a1-e9bb-4b69-94ad-afa0f9f8179e","Type":"ContainerStarted","Data":"3cd70799f5ecb072786ea4ac9e2efed90ad22199e67283bcba97ab7b43b9f0f0"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.628695 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" event={"ID":"1cd2d4f0-43da-4722-b619-ba5b470d3d16","Type":"ContainerStarted","Data":"4539bca42213afa07fcc4a7a47bbdc2c927b9b4f6e94bcb30341b435fb0eef96"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.634324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" event={"ID":"0aacdf8d-27bb-447d-97f4-03e459b5e8e2","Type":"ContainerStarted","Data":"e3341485b15f85d12e595a7c91d1a4dc2b46d5bbcafcc4540c2549ffe4cb2bad"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.635207 4860 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-brdj6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" start-of-body= Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.635262 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" podUID="0aacdf8d-27bb-447d-97f4-03e459b5e8e2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.44:8443/healthz\": dial tcp 10.217.0.44:8443: connect: connection refused" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.636948 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q7sm9" event={"ID":"d8bc0ea0-46f4-4c4b-b008-394713ae0863","Type":"ContainerStarted","Data":"e1d2da31a12b1b65690d44a1e2b52eed537450bc18f4b522a7655f43e0136435"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.639705 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vtsqg" event={"ID":"3416422c-d1ef-463b-a846-11d6ea9715c3","Type":"ContainerStarted","Data":"d274277c93c54177b337f1d317b9af189a867c032e0c01689c94902d5736104c"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.643068 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" 
podStartSLOduration=21.643050764 podStartE2EDuration="21.643050764s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.642732056 +0000 UTC m=+42.270782241" watchObservedRunningTime="2026-01-23 08:15:35.643050764 +0000 UTC m=+42.271100949" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.672339 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" event={"ID":"f5dfda28-7a8d-45a0-81e0-60fe859754bd","Type":"ContainerStarted","Data":"f66ee375aad7d14714f002d77cdaa57c2256dc94540dcfe9b735d0099893d957"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.672378 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" event={"ID":"8149687b-dd22-4d3d-998c-efd2692db14b","Type":"ContainerStarted","Data":"64e3f352f0177430d27442fadb8f39f9918d1889697891e547dd312ff7b95193"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.676465 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" event={"ID":"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c","Type":"ContainerStarted","Data":"cc849ee1fedb32781f67d16893c27792ca95928cdb45f18f2c9d9124cf60bc3f"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.678287 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-w9rkq" event={"ID":"0817083f-3358-4e5a-ab54-be291b5d20d9","Type":"ContainerStarted","Data":"ce572de9ac764a44d998f0ba772246758787267893b23f63e00524598bd6caac"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.680085 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dr4bl" event={"ID":"b880fa46-0915-4446-b429-94148624a92d","Type":"ContainerStarted","Data":"e72caeea10160a96f5b06a13d1d33ca3638a553837e7d72e9fb43a1bb20a6ee0"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.685356 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" event={"ID":"d43a2987-8474-4e12-bf30-0540d082c40a","Type":"ContainerStarted","Data":"513306333bd769b73d95b0c27ad8ae926b8c514e00c88ae9b99eee6a65cbf7e4"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.688142 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.688380 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.188361775 +0000 UTC m=+42.816411960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.688621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.690205 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.190196721 +0000 UTC m=+42.818246906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.722702 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" event={"ID":"aa04c3ab-1fb1-45e2-b236-64364a2112ef","Type":"ContainerStarted","Data":"824a91c7f5ba2904662e039752bab37879c082d5be78b60384e45682bd797d01"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.723420 4860 csr.go:261] certificate signing request csr-jjzcx is approved, waiting to be issued Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.737350 4860 csr.go:257] certificate signing request csr-jjzcx is issued Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.752681 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dr4bl" podStartSLOduration=8.752662333 podStartE2EDuration="8.752662333s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.708697016 +0000 UTC m=+42.336747231" watchObservedRunningTime="2026-01-23 08:15:35.752662333 +0000 UTC m=+42.380712518" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.755298 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" event={"ID":"a1aed041-7391-4d7e-9ed3-b54294438e6f","Type":"ContainerStarted","Data":"57972706124c9981a9ae09f7db6da2f054afa0d8e631ab7c6ccb330f87e9a1da"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.755898 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.776829 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" podStartSLOduration=21.77681022 podStartE2EDuration="21.77681022s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.75573814 +0000 UTC m=+42.383788325" watchObservedRunningTime="2026-01-23 08:15:35.77681022 +0000 UTC m=+42.404860405" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.777809 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" podStartSLOduration=21.777803495 podStartE2EDuration="21.777803495s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:35.776606815 +0000 UTC m=+42.404657000" watchObservedRunningTime="2026-01-23 08:15:35.777803495 +0000 UTC m=+42.405853680" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.779878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" event={"ID":"c51ed869-7936-40f3-963f-5744bbf20a71","Type":"ContainerStarted","Data":"91a470a9c81e13746323ee138499b7caeafe33b39513244d9bccd72b939342f1"} Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.782496 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-nltcq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.782578 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nltcq" podUID="966345d1-b077-4fab-89b6-ca2830cbe04a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.791087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.792556 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.292521875 +0000 UTC m=+42.920572060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.817467 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6h5g" Jan 23 08:15:35 crc kubenswrapper[4860]: I0123 08:15:35.896199 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:35 crc kubenswrapper[4860]: E0123 08:15:35.896722 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.396711487 +0000 UTC m=+43.024761672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.003852 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.004173 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.504139389 +0000 UTC m=+43.132189574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.004518 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.004917 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.504906749 +0000 UTC m=+43.132956944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.117507 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.117668 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.617642355 +0000 UTC m=+43.245692540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.117804 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.118206 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.618199489 +0000 UTC m=+43.246249674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.219207 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.219656 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.719628881 +0000 UTC m=+43.347679066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.219888 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.220225 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.720208886 +0000 UTC m=+43.348259071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.320762 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.320928 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.820901778 +0000 UTC m=+43.448951963 (durationBeforeRetry 500ms). 
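Each failure above is parked and retried no earlier than the "No retries permitted until ..." timestamp, with durationBeforeRetry printed as 500ms. The sketch below reproduces that retry-with-backoff pattern generically using apimachinery's wait helpers; it is not the kubelet's nestedpendingoperations code, and tryMount is a hypothetical stand-in for the failing MountDevice call.

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errDriverNotRegistered = errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

// tryMount stands in for MountVolume.MountDevice; it keeps failing until the
// driver has "registered" (simulated here by a fixed point in time).
func tryMount(registeredAfter time.Time) error {
	if time.Now().Before(registeredAfter) {
		return errDriverNotRegistered
	}
	return nil
}

func main() {
	registeredAfter := time.Now().Add(1200 * time.Millisecond)

	// 500ms between attempts, mirroring durationBeforeRetry in the log;
	// Factor 1.0 keeps the delay constant, a larger factor would grow it.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 1.0, Steps: 10}

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if mountErr := tryMount(registeredAfter); mountErr != nil {
			fmt.Println("retrying:", mountErr)
			return false, nil // not done, not fatal: try again after the delay
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Println("mount succeeded")
}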
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.321051 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.321385 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.821373021 +0000 UTC m=+43.449423206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.421925 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.422118 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.922091705 +0000 UTC m=+43.550141890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.422219 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.422509 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.922498505 +0000 UTC m=+43.550548690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.441752 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:36 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:36 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:36 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.441816 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.522863 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.523099 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.023072796 +0000 UTC m=+43.651122981 (durationBeforeRetry 500ms). 
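Mixed into the volume retries, the router's startup probe is failing with HTTP 500 while its backend-http and has-synced health sub-checks are still unhealthy, and other pods report readiness failures such as connection refused or client timeouts. The sketch below shows the general shape of such an HTTP probe (a bounded GET where a transport error or a non-2xx status counts as failure); the URL and timeout are placeholders, not the router's actual probe configuration.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP check: a bounded GET where a timeout, a connection
// error, or a non-2xx status all count as failure.
func probe(url string, timeout time.Duration) error {
	client := http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" or a client-side timeout
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Placeholder endpoint, not the router's real healthz address.
	if err := probe("http://127.0.0.1:1936/healthz", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
		return
	}
	fmt.Println("probe ok")
}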
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.523255 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.523545 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.023531937 +0000 UTC m=+43.651582122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.624187 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.624397 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.124366414 +0000 UTC m=+43.752416609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.624742 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.625075 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.125066292 +0000 UTC m=+43.753116477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.725237 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.725419 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.225384235 +0000 UTC m=+43.853434420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.725521 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.725917 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.225900489 +0000 UTC m=+43.853950674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.739179 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 08:10:35 +0000 UTC, rotation deadline is 2026-11-10 22:24:56.048614466 +0000 UTC Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.739224 4860 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6998h9m19.309393623s for next certificate rotation Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.788145 4860 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ld8cj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.788220 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" podUID="515bd73b-7a1b-414c-b189-25042a6049a3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.810222 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" event={"ID":"3615b8d9-7529-46e3-9320-1e8a70ced9a5","Type":"ContainerStarted","Data":"a26a6cba07b17a4a2cac909d2d8c2f3eade65054cf0310af8827b3ae6cd35edd"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.810303 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" 
event={"ID":"3615b8d9-7529-46e3-9320-1e8a70ced9a5","Type":"ContainerStarted","Data":"5954111b191ad0955c4a66aea2753f4cb3a06e9bc5ee684ba431b94c71719178"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.826240 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.826432 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.326403448 +0000 UTC m=+43.954453673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.827829 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" event={"ID":"aa04c3ab-1fb1-45e2-b236-64364a2112ef","Type":"ContainerStarted","Data":"f591ebb883fd708c362af7ae62c34fb4509d9e8042eb36e399aba498f831b304"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.858138 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" podStartSLOduration=22.858121175 podStartE2EDuration="22.858121175s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:36.855313214 +0000 UTC m=+43.483363409" watchObservedRunningTime="2026-01-23 08:15:36.858121175 +0000 UTC m=+43.486171360" Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.884461 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" event={"ID":"5d4dcd04-42ad-44e1-99cd-26ca78b8fa4c","Type":"ContainerStarted","Data":"be7d6bdbd0b4f196e6274d18b1f776a75d9bf66f7d8229f9cb3087ef789fd7b7"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.911611 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8ctp5" podStartSLOduration=22.91159264 podStartE2EDuration="22.91159264s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:36.90957757 +0000 UTC m=+43.537627755" watchObservedRunningTime="2026-01-23 08:15:36.91159264 +0000 UTC m=+43.539642825" Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.920005 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" 
event={"ID":"c51ed869-7936-40f3-963f-5744bbf20a71","Type":"ContainerStarted","Data":"5e878a504004ca96cc0edcbaa57743301cbd8e50b921c48edc392e42ec030570"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.929979 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:36 crc kubenswrapper[4860]: E0123 08:15:36.930440 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.430426334 +0000 UTC m=+44.058476520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.946263 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" event={"ID":"89b489d0-ffe2-418b-a4f8-4e693bb9cf63","Type":"ContainerStarted","Data":"7d39a37db01704729965d0978dea9e33263cb6eb597d73d200b70a155df035aa"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.968822 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vtsqg" event={"ID":"3416422c-d1ef-463b-a846-11d6ea9715c3","Type":"ContainerStarted","Data":"f4da1b1ea82e6a61bc026f82bbbc5edfadc1f3e4776c6e9f6c6efd26dba9d269"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.968871 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vtsqg" event={"ID":"3416422c-d1ef-463b-a846-11d6ea9715c3","Type":"ContainerStarted","Data":"9f679b33510ecfa2e771144f3c4591bf350356a969de952433bd99bdc0ea3518"} Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.968887 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.968903 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.968839 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" gracePeriod=30 Jan 23 08:15:36 crc kubenswrapper[4860]: I0123 08:15:36.981786 4860 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k9j8h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 23 08:15:36 crc kubenswrapper[4860]: 
I0123 08:15:36.981842 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.004400 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-brdj6" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.033487 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8jdp4" podStartSLOduration=23.033471436 podStartE2EDuration="23.033471436s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:36.981455838 +0000 UTC m=+43.609506023" watchObservedRunningTime="2026-01-23 08:15:37.033471436 +0000 UTC m=+43.661521621" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.034016 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vtsqg" podStartSLOduration=23.034006761 podStartE2EDuration="23.034006761s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.030942534 +0000 UTC m=+43.658992719" watchObservedRunningTime="2026-01-23 08:15:37.034006761 +0000 UTC m=+43.662056946" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.042046 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.043592 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.543578202 +0000 UTC m=+44.171628377 (durationBeforeRetry 500ms). 
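The pod_startup_latency_tracker entries report podStartSLOduration and podStartE2EDuration as the gap between podCreationTimestamp and the watchObservedRunningTime in the same entry; with no image pull recorded, firstStartedPulling and lastFinishedPulling stay at the zero time and the two durations coincide. The arithmetic below uses the values copied from the network-metrics-daemon-vtsqg entry.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the network-metrics-daemon-vtsqg entry in the log.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-23T08:15:14Z")
	watched, _ := time.Parse(time.RFC3339Nano, "2026-01-23T08:15:37.034006761Z")

	// Elapsed time between pod creation and the observation of the running
	// state; prints 23.034006761s, matching podStartSLOduration in the log.
	fmt.Println(watched.Sub(created))
}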
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.045144 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.059748 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.559733177 +0000 UTC m=+44.187783362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.075944 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ptbvt" podStartSLOduration=23.075928155 podStartE2EDuration="23.075928155s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.073392331 +0000 UTC m=+43.701442516" watchObservedRunningTime="2026-01-23 08:15:37.075928155 +0000 UTC m=+43.703978340" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.121963 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9kmm4" podStartSLOduration=23.121942743 podStartE2EDuration="23.121942743s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.111688104 +0000 UTC m=+43.739738289" watchObservedRunningTime="2026-01-23 08:15:37.121942743 +0000 UTC m=+43.749992928" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.123095 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rgzmr" podStartSLOduration=23.123086791 podStartE2EDuration="23.123086791s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.093788964 +0000 UTC m=+43.721839149" watchObservedRunningTime="2026-01-23 08:15:37.123086791 +0000 UTC m=+43.751136976" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 
08:15:37.148282 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" podStartSLOduration=23.148265335 podStartE2EDuration="23.148265335s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.146399088 +0000 UTC m=+43.774449273" watchObservedRunningTime="2026-01-23 08:15:37.148265335 +0000 UTC m=+43.776315520" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.150610 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.151340 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.651323282 +0000 UTC m=+44.279373467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.167077 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6gqph" podStartSLOduration=23.167062968 podStartE2EDuration="23.167062968s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.166461622 +0000 UTC m=+43.794511807" watchObservedRunningTime="2026-01-23 08:15:37.167062968 +0000 UTC m=+43.795113153" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.219375 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lfmk9" podStartSLOduration=23.219356544 podStartE2EDuration="23.219356544s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.216480481 +0000 UTC m=+43.844530666" watchObservedRunningTime="2026-01-23 08:15:37.219356544 +0000 UTC m=+43.847406729" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.243910 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-v9sjk" podStartSLOduration=23.243894691 podStartE2EDuration="23.243894691s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.242441335 +0000 UTC m=+43.870491520" watchObservedRunningTime="2026-01-23 
08:15:37.243894691 +0000 UTC m=+43.871944866" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.252535 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.253055 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.753040101 +0000 UTC m=+44.381090306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.304520 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hb9c8" podStartSLOduration=23.304503716 podStartE2EDuration="23.304503716s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.30187755 +0000 UTC m=+43.929927735" watchObservedRunningTime="2026-01-23 08:15:37.304503716 +0000 UTC m=+43.932553901" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.305298 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-w9rkq" podStartSLOduration=10.305290576 podStartE2EDuration="10.305290576s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.279463166 +0000 UTC m=+43.907513351" watchObservedRunningTime="2026-01-23 08:15:37.305290576 +0000 UTC m=+43.933340771" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.326510 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9r4z6" podStartSLOduration=23.32649244 podStartE2EDuration="23.32649244s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:37.324000587 +0000 UTC m=+43.952050772" watchObservedRunningTime="2026-01-23 08:15:37.32649244 +0000 UTC m=+43.954542625" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.353384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.353533 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.853509259 +0000 UTC m=+44.481559444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.353740 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.354009 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.854002231 +0000 UTC m=+44.482052416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.445159 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:37 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:37 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:37 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.445537 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.455083 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.455187 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 08:15:37.955173617 +0000 UTC m=+44.583223792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.455281 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.455578 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:37.955568967 +0000 UTC m=+44.583619152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.555817 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.556100 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.056082446 +0000 UTC m=+44.684132621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.580072 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-44v8b"] Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.581439 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.583688 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.594466 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44v8b"] Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.604549 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.604615 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.607685 4860 patch_prober.go:28] interesting pod/console-f9d7485db-vtj8m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.607875 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vtj8m" podUID="72589342-3cb8-4e43-bad0-f1726f70d77a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.649678 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-nltcq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.649720 4860 patch_prober.go:28] interesting pod/downloads-7954f5f757-nltcq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.649732 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nltcq" podUID="966345d1-b077-4fab-89b6-ca2830cbe04a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.649767 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nltcq" podUID="966345d1-b077-4fab-89b6-ca2830cbe04a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.656790 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-catalog-content\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.656849 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-utilities\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.656904 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.656951 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2g29\" (UniqueName: \"kubernetes.io/projected/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-kube-api-access-h2g29\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.658081 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.158070101 +0000 UTC m=+44.786120286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.757747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.757930 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.257897093 +0000 UTC m=+44.885947278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.758330 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-utilities\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.758510 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.758685 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2g29\" (UniqueName: \"kubernetes.io/projected/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-kube-api-access-h2g29\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.758789 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-utilities\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.758896 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.258880067 +0000 UTC m=+44.886930252 (durationBeforeRetry 500ms). 
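In contrast to the CSI-backed PVC, the volumes for the new marketplace catalog pods (the catalog-content and utilities emptyDirs plus a projected kube-api-access token) mount immediately, because emptyDir and projected volumes are handled by the kubelet itself and need no registered driver. Below is a sketch of how such emptyDir volumes are declared with the corev1 Go types; the mount paths are placeholders, not taken from the actual pod spec.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The two emptyDir volumes the catalog pods in the log mount.
	vols := []corev1.Volume{
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
	// Placeholder mount paths; the real paths live in the catalog pod spec.
	mounts := []corev1.VolumeMount{
		{Name: "catalog-content", MountPath: "/extracted-catalog"},
		{Name: "utilities", MountPath: "/utilities"},
	}
	out, _ := json.MarshalIndent(struct {
		Volumes      []corev1.Volume      `json:"volumes"`
		VolumeMounts []corev1.VolumeMount `json:"volumeMounts"`
	}{vols, mounts}, "", "  ")
	fmt.Println(string(out))
}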
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.759069 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-catalog-content\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.759424 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-catalog-content\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.772519 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dws9d"] Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.773412 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.775346 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.782042 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2g29\" (UniqueName: \"kubernetes.io/projected/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-kube-api-access-h2g29\") pod \"community-operators-44v8b\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.788649 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dws9d"] Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.824704 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.824751 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.831654 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.838857 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.838925 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.860098 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.860446 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.360419543 +0000 UTC m=+44.988469758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.860854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-catalog-content\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.861461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-utilities\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.861555 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwz2t\" (UniqueName: \"kubernetes.io/projected/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-kube-api-access-pwz2t\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.899655 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.964837 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-utilities\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.965201 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwz2t\" (UniqueName: \"kubernetes.io/projected/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-kube-api-access-pwz2t\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.965282 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.965336 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-catalog-content\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.965803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-catalog-content\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: E0123 08:15:37.966129 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.466114821 +0000 UTC m=+45.094165006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.966139 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-utilities\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.970805 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zrqw5"] Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.973863 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.987094 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-c6bwf" Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.987488 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrqw5"] Jan 23 08:15:37 crc kubenswrapper[4860]: I0123 08:15:37.989321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwz2t\" (UniqueName: \"kubernetes.io/projected/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-kube-api-access-pwz2t\") pod \"certified-operators-dws9d\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.066864 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.067088 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.567056592 +0000 UTC m=+45.195106777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.069617 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-utilities\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.069693 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gvf\" (UniqueName: \"kubernetes.io/projected/ce3742d9-24a7-4b15-9301-8d03596ae37b-kube-api-access-q2gvf\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.069715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-catalog-content\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.069749 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.070127 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.570118739 +0000 UTC m=+45.198168924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.105372 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.112646 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44v8b"] Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.173682 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.174182 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-utilities\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.174230 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gvf\" (UniqueName: \"kubernetes.io/projected/ce3742d9-24a7-4b15-9301-8d03596ae37b-kube-api-access-q2gvf\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.174257 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-catalog-content\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.174851 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-catalog-content\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.174953 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.674932636 +0000 UTC m=+45.302982821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.175255 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-utilities\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.183710 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cs456"] Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.184917 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.198174 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gvf\" (UniqueName: \"kubernetes.io/projected/ce3742d9-24a7-4b15-9301-8d03596ae37b-kube-api-access-q2gvf\") pod \"community-operators-zrqw5\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.206936 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cs456"] Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.275527 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.275879 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-utilities\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.275912 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tz8\" (UniqueName: \"kubernetes.io/projected/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-kube-api-access-h5tz8\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.275965 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-catalog-content\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.276367 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.776353877 +0000 UTC m=+45.404404062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.309522 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.373675 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dws9d"] Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.377574 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.377849 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-utilities\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.377884 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tz8\" (UniqueName: \"kubernetes.io/projected/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-kube-api-access-h5tz8\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.377943 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-catalog-content\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.378452 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-catalog-content\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.378719 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-utilities\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.378812 4860 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.878794634 +0000 UTC m=+45.506844809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.409696 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tz8\" (UniqueName: \"kubernetes.io/projected/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-kube-api-access-h5tz8\") pod \"certified-operators-cs456\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.447736 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:38 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:38 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:38 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.447795 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.479997 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.480354 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.98034164 +0000 UTC m=+45.608391825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.518937 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.580781 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.581428 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.081391672 +0000 UTC m=+45.709441857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.682843 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.683450 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.18343816 +0000 UTC m=+45.811488345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.700157 4860 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.731726 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrqw5"] Jan 23 08:15:38 crc kubenswrapper[4860]: W0123 08:15:38.741902 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3742d9_24a7_4b15_9301_8d03596ae37b.slice/crio-5a1b2483986f272e4218a1250c1163cd19ef321167579e4beab2a3ddda80e70a WatchSource:0}: Error finding container 5a1b2483986f272e4218a1250c1163cd19ef321167579e4beab2a3ddda80e70a: Status 404 returned error can't find the container with id 5a1b2483986f272e4218a1250c1163cd19ef321167579e4beab2a3ddda80e70a Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.784395 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.784597 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.284570694 +0000 UTC m=+45.912620879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.784783 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.785180 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.285164968 +0000 UTC m=+45.913215153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.874697 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cs456"] Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.886579 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.886745 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.386714244 +0000 UTC m=+46.014764439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.886921 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.887347 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.387336979 +0000 UTC m=+46.015387244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: W0123 08:15:38.891401 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94cbcca6_2f65_40ab_ab9f_37ac19db1f4b.slice/crio-9b549e5464cfacbda0374f9688b7022c712b7d2b48a69fed62565bca66f936ca WatchSource:0}: Error finding container 9b549e5464cfacbda0374f9688b7022c712b7d2b48a69fed62565bca66f936ca: Status 404 returned error can't find the container with id 9b549e5464cfacbda0374f9688b7022c712b7d2b48a69fed62565bca66f936ca Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.987904 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.988127 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.488093034 +0000 UTC m=+46.116143229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.988285 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:38 crc kubenswrapper[4860]: E0123 08:15:38.988616 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.488601737 +0000 UTC m=+46.116651932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fsxq7" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.994153 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerID="c99c1540542ed48ce31c24069544813082555a3724400e7e58a46608ffbdc127" exitCode=0 Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.994263 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v8b" event={"ID":"f2bd04fa-7e4e-4a22-9b93-418f22e296b2","Type":"ContainerDied","Data":"c99c1540542ed48ce31c24069544813082555a3724400e7e58a46608ffbdc127"} Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.994315 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v8b" event={"ID":"f2bd04fa-7e4e-4a22-9b93-418f22e296b2","Type":"ContainerStarted","Data":"55fac23b6701291d1d4d76cd940257734df46e41e8a05450c2edf1bfc94f8cdd"} Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.996476 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.996876 4860 generic.go:334] "Generic (PLEG): container finished" podID="d43a2987-8474-4e12-bf30-0540d082c40a" containerID="513306333bd769b73d95b0c27ad8ae926b8c514e00c88ae9b99eee6a65cbf7e4" exitCode=0 Jan 23 08:15:38 crc kubenswrapper[4860]: I0123 08:15:38.996940 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" event={"ID":"d43a2987-8474-4e12-bf30-0540d082c40a","Type":"ContainerDied","Data":"513306333bd769b73d95b0c27ad8ae926b8c514e00c88ae9b99eee6a65cbf7e4"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.000064 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerStarted","Data":"ba5a64ba5547861c6913ff7a7b780f463247e6c35ecd28b0ef9dbbfadd6b3f9a"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.000125 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerStarted","Data":"1bb4ba557fdb359599fc2b5b2f359240d5ba6808086b8236a261e8e7163a88d9"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.006467 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqw5" event={"ID":"ce3742d9-24a7-4b15-9301-8d03596ae37b","Type":"ContainerStarted","Data":"5a1b2483986f272e4218a1250c1163cd19ef321167579e4beab2a3ddda80e70a"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.008959 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs456" event={"ID":"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b","Type":"ContainerStarted","Data":"9b549e5464cfacbda0374f9688b7022c712b7d2b48a69fed62565bca66f936ca"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.026300 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" event={"ID":"89b489d0-ffe2-418b-a4f8-4e693bb9cf63","Type":"ContainerStarted","Data":"16cd173217b2e3d4d9475318047c2b2d6706cd91882fea4196ac8ef551c5acb3"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.026333 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" event={"ID":"89b489d0-ffe2-418b-a4f8-4e693bb9cf63","Type":"ContainerStarted","Data":"ac5f86d3c0b6970dbbf9a4a92c1f3e964184b4b4e8ae9f1240d66c1d01afb1d9"} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.089507 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 08:15:39 crc kubenswrapper[4860]: E0123 08:15:39.090907 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 08:15:39.590891091 +0000 UTC m=+46.218941276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.106914 4860 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T08:15:38.700186491Z","Handler":null,"Name":""} Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.120347 4860 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.120391 4860 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.192097 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.194800 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.194853 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7"
Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.220811 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fsxq7\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7"
Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.270808 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7"
Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.293560 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.299636 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8".
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.402564 4860 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fzzcx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]log ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]etcd ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/max-in-flight-filter ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 08:15:39 crc kubenswrapper[4860]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 08:15:39 crc kubenswrapper[4860]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 08:15:39 crc kubenswrapper[4860]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 08:15:39 crc kubenswrapper[4860]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 08:15:39 crc kubenswrapper[4860]: livez check failed Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.402623 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" podUID="3615b8d9-7529-46e3-9320-1e8a70ced9a5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.444870 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:39 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:39 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:39 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.444940 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.478957 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fsxq7"] Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.666180 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.696823 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 
08:15:39.697625 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.700059 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.700171 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.705365 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.773265 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vlqk5"] Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.774575 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.776832 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.792971 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlqk5"] Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.801406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-catalog-content\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.801474 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-utilities\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.801535 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpfz\" (UniqueName: \"kubernetes.io/projected/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-kube-api-access-xcpfz\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.801573 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.801596 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:39 crc 
kubenswrapper[4860]: I0123 08:15:39.875150 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6w4d9" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.902712 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpfz\" (UniqueName: \"kubernetes.io/projected/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-kube-api-access-xcpfz\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.902808 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.902833 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.902908 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-catalog-content\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.902954 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-utilities\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.903529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-utilities\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.903816 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.904274 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-catalog-content\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.976555 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpfz\" (UniqueName: 
\"kubernetes.io/projected/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-kube-api-access-xcpfz\") pod \"redhat-marketplace-vlqk5\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:39 crc kubenswrapper[4860]: I0123 08:15:39.986674 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.012229 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.059351 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" event={"ID":"89b489d0-ffe2-418b-a4f8-4e693bb9cf63","Type":"ContainerStarted","Data":"9c394a8132901c4ed9736572192db1b8baa6edb2a745a1324caf589601942fac"} Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.086752 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kc8q5" podStartSLOduration=13.086738616 podStartE2EDuration="13.086738616s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:40.086050879 +0000 UTC m=+46.714101084" watchObservedRunningTime="2026-01-23 08:15:40.086738616 +0000 UTC m=+46.714788801" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.087285 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.087439 4860 generic.go:334] "Generic (PLEG): container finished" podID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerID="ba5a64ba5547861c6913ff7a7b780f463247e6c35ecd28b0ef9dbbfadd6b3f9a" exitCode=0 Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.087668 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerDied","Data":"ba5a64ba5547861c6913ff7a7b780f463247e6c35ecd28b0ef9dbbfadd6b3f9a"} Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.095582 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerID="9ff0eded88ec6381d008d0833991054ad5ae0df69aa4e9975f0f7d47c545441e" exitCode=0 Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.095670 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqw5" event={"ID":"ce3742d9-24a7-4b15-9301-8d03596ae37b","Type":"ContainerDied","Data":"9ff0eded88ec6381d008d0833991054ad5ae0df69aa4e9975f0f7d47c545441e"} Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.105869 4860 generic.go:334] "Generic (PLEG): container finished" podID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerID="ecfda0ef9c96c72d8803eda616b9d88144f7b386325a40a771fa7e6fcb212187" exitCode=0 Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.105955 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs456" event={"ID":"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b","Type":"ContainerDied","Data":"ecfda0ef9c96c72d8803eda616b9d88144f7b386325a40a771fa7e6fcb212187"} Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.110225 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" event={"ID":"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438","Type":"ContainerStarted","Data":"71cd2ce37f85fad3ba55ecc4402fbe04d024f5e62e91d4c912f376d1e791398f"} Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.110254 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" event={"ID":"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438","Type":"ContainerStarted","Data":"3f1857537940c37c15e973f555290eadd3e33e2084d8770e6146b6cb4ccc9d8a"} Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.179666 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" podStartSLOduration=26.179645433 podStartE2EDuration="26.179645433s" podCreationTimestamp="2026-01-23 08:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:40.161158398 +0000 UTC m=+46.789208593" watchObservedRunningTime="2026-01-23 08:15:40.179645433 +0000 UTC m=+46.807695618" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.185212 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.188371 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.199668 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.208096 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-utilities\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.208132 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-catalog-content\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.208269 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42l2\" (UniqueName: \"kubernetes.io/projected/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-kube-api-access-x42l2\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.310614 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42l2\" (UniqueName: \"kubernetes.io/projected/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-kube-api-access-x42l2\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.310727 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-utilities\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.310787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-catalog-content\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.312594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-catalog-content\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.312885 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-utilities\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.338463 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x42l2\" (UniqueName: \"kubernetes.io/projected/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-kube-api-access-x42l2\") pod \"redhat-marketplace-wqsdd\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.429495 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.442793 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:40 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:40 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:40 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.442862 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.476999 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlqk5"] Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.512737 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.519963 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.615001 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43a2987-8474-4e12-bf30-0540d082c40a-config-volume\") pod \"d43a2987-8474-4e12-bf30-0540d082c40a\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.615086 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43a2987-8474-4e12-bf30-0540d082c40a-secret-volume\") pod \"d43a2987-8474-4e12-bf30-0540d082c40a\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.615127 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jf6\" (UniqueName: \"kubernetes.io/projected/d43a2987-8474-4e12-bf30-0540d082c40a-kube-api-access-m4jf6\") pod \"d43a2987-8474-4e12-bf30-0540d082c40a\" (UID: \"d43a2987-8474-4e12-bf30-0540d082c40a\") " Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.621112 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d43a2987-8474-4e12-bf30-0540d082c40a-config-volume" (OuterVolumeSpecName: "config-volume") pod "d43a2987-8474-4e12-bf30-0540d082c40a" (UID: "d43a2987-8474-4e12-bf30-0540d082c40a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.622412 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43a2987-8474-4e12-bf30-0540d082c40a-kube-api-access-m4jf6" (OuterVolumeSpecName: "kube-api-access-m4jf6") pod "d43a2987-8474-4e12-bf30-0540d082c40a" (UID: "d43a2987-8474-4e12-bf30-0540d082c40a"). InnerVolumeSpecName "kube-api-access-m4jf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.634717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d43a2987-8474-4e12-bf30-0540d082c40a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d43a2987-8474-4e12-bf30-0540d082c40a" (UID: "d43a2987-8474-4e12-bf30-0540d082c40a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.718829 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43a2987-8474-4e12-bf30-0540d082c40a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.718879 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43a2987-8474-4e12-bf30-0540d082c40a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.718892 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jf6\" (UniqueName: \"kubernetes.io/projected/d43a2987-8474-4e12-bf30-0540d082c40a-kube-api-access-m4jf6\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.780402 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.785874 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2x4xz"] Jan 23 08:15:40 crc kubenswrapper[4860]: E0123 08:15:40.786341 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43a2987-8474-4e12-bf30-0540d082c40a" containerName="collect-profiles" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.791340 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43a2987-8474-4e12-bf30-0540d082c40a" containerName="collect-profiles" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.791638 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d43a2987-8474-4e12-bf30-0540d082c40a" containerName="collect-profiles" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.792932 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2x4xz"] Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.793177 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:40 crc kubenswrapper[4860]: W0123 08:15:40.800946 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92e9a494_d0d1_4bb9_8cc9_86e044e7a75b.slice/crio-c2a038b0564fbe984aebb60d971a2190c42f410f8e1e00fe80b1a7d7dd556d09 WatchSource:0}: Error finding container c2a038b0564fbe984aebb60d971a2190c42f410f8e1e00fe80b1a7d7dd556d09: Status 404 returned error can't find the container with id c2a038b0564fbe984aebb60d971a2190c42f410f8e1e00fe80b1a7d7dd556d09 Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.801159 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.883547 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ld8cj" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.922142 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-catalog-content\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.922441 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc69p\" (UniqueName: \"kubernetes.io/projected/13d8c162-6bfc-447f-b688-6f4e74687cd8-kube-api-access-bc69p\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:40 crc kubenswrapper[4860]: I0123 08:15:40.922506 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-utilities\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.025559 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-catalog-content\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.025955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc69p\" (UniqueName: \"kubernetes.io/projected/13d8c162-6bfc-447f-b688-6f4e74687cd8-kube-api-access-bc69p\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.026003 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-utilities\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.026315 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-catalog-content\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.026465 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-utilities\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.049893 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc69p\" (UniqueName: \"kubernetes.io/projected/13d8c162-6bfc-447f-b688-6f4e74687cd8-kube-api-access-bc69p\") pod \"redhat-operators-2x4xz\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.140360 4860 generic.go:334] "Generic (PLEG): container finished" podID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerID="79e583ef3f9e022b60543423e4680f00d0c13aa67caa354d4f2b91e31eea812b" exitCode=0 Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.140474 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqsdd" event={"ID":"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b","Type":"ContainerDied","Data":"79e583ef3f9e022b60543423e4680f00d0c13aa67caa354d4f2b91e31eea812b"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.140506 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqsdd" event={"ID":"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b","Type":"ContainerStarted","Data":"c2a038b0564fbe984aebb60d971a2190c42f410f8e1e00fe80b1a7d7dd556d09"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.147162 4860 generic.go:334] "Generic (PLEG): container finished" podID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerID="dd28a75d93dcce01b3124cff7f543d6bf285f23b193dc83cb804694ae1168249" exitCode=0 Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.147251 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlqk5" event={"ID":"6938dfcf-ed32-45c3-a2a6-04f0b60315ac","Type":"ContainerDied","Data":"dd28a75d93dcce01b3124cff7f543d6bf285f23b193dc83cb804694ae1168249"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.147276 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlqk5" event={"ID":"6938dfcf-ed32-45c3-a2a6-04f0b60315ac","Type":"ContainerStarted","Data":"40a1bf1857e2489755387482e560ec95d2023b8b0b6bed97867ae3ed3236acb4"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.151479 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" event={"ID":"d43a2987-8474-4e12-bf30-0540d082c40a","Type":"ContainerDied","Data":"b00de6f24089ef9453393997dab4ff369ee80be5da0395915b82653fd587475b"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.151511 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b00de6f24089ef9453393997dab4ff369ee80be5da0395915b82653fd587475b" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.151583 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-6l6fj" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.174550 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb","Type":"ContainerStarted","Data":"132b8f0874cca61f7f1f64d3f367d4b716cfd84ea8229013d64fdd376861a71f"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.174593 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb","Type":"ContainerStarted","Data":"06354c2bff48d1c8e957c87f3f648daf0f34bd7b83be8e587c8c1aa037d01546"} Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.174912 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.180613 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tcrzp"] Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.182068 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.187415 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.203955 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcrzp"] Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.238375 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.23835397 podStartE2EDuration="2.23835397s" podCreationTimestamp="2026-01-23 08:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:41.234917953 +0000 UTC m=+47.862968148" watchObservedRunningTime="2026-01-23 08:15:41.23835397 +0000 UTC m=+47.866404165" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.317609 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.330042 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnh4z\" (UniqueName: \"kubernetes.io/projected/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-kube-api-access-nnh4z\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.330202 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-catalog-content\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.330284 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-utilities\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.431884 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnh4z\" (UniqueName: \"kubernetes.io/projected/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-kube-api-access-nnh4z\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.431992 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-catalog-content\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.432109 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-utilities\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.432657 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-catalog-content\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.432959 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-utilities\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.440957 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.444862 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:41 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:41 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:41 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.444921 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:41 crc kubenswrapper[4860]: E0123 08:15:41.452376 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:15:41 
crc kubenswrapper[4860]: E0123 08:15:41.454733 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.460572 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnh4z\" (UniqueName: \"kubernetes.io/projected/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-kube-api-access-nnh4z\") pod \"redhat-operators-tcrzp\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: E0123 08:15:41.464908 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:15:41 crc kubenswrapper[4860]: E0123 08:15:41.464984 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.528464 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.714821 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2x4xz"] Jan 23 08:15:41 crc kubenswrapper[4860]: W0123 08:15:41.724489 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13d8c162_6bfc_447f_b688_6f4e74687cd8.slice/crio-748fa058f78cd7fed22e20b03c4886443bd3a93989a0bd235fed97321fd370ba WatchSource:0}: Error finding container 748fa058f78cd7fed22e20b03c4886443bd3a93989a0bd235fed97321fd370ba: Status 404 returned error can't find the container with id 748fa058f78cd7fed22e20b03c4886443bd3a93989a0bd235fed97321fd370ba Jan 23 08:15:41 crc kubenswrapper[4860]: I0123 08:15:41.853811 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcrzp"] Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.189470 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.190152 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.204094 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.204366 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.206209 4860 generic.go:334] "Generic (PLEG): container finished" podID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerID="53843d42271c87a40e04ef3ab9487edb07464ede4443dc0301c69aee8009c81c" exitCode=0 Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.206309 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerDied","Data":"53843d42271c87a40e04ef3ab9487edb07464ede4443dc0301c69aee8009c81c"} Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.206338 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerStarted","Data":"748fa058f78cd7fed22e20b03c4886443bd3a93989a0bd235fed97321fd370ba"} Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.210067 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.230619 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerStarted","Data":"1b3b048664ea216f02ff364eb6c6bfb2a6f30c39b1e74d4016eca986dffd89ec"} Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.241427 4860 generic.go:334] "Generic (PLEG): container finished" podID="3ac471c4-a681-452a-bcb5-5b1ca1b43fbb" containerID="132b8f0874cca61f7f1f64d3f367d4b716cfd84ea8229013d64fdd376861a71f" exitCode=0 Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.242127 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb","Type":"ContainerDied","Data":"132b8f0874cca61f7f1f64d3f367d4b716cfd84ea8229013d64fdd376861a71f"} Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.348367 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.348516 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.442628 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 
08:15:42 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:42 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:42 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.442762 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.450619 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.451211 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.451334 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.471045 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.517281 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.843061 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:42 crc kubenswrapper[4860]: I0123 08:15:42.848364 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fzzcx" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.102083 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.128512 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-w9rkq" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.259315 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e508ae29-9716-45a6-acee-c1f9d6c5f15d","Type":"ContainerStarted","Data":"c9c31480a18392decb7a2aabb34f4b2598c66de65a18ebd3b52e763ed64e6791"} Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.261534 4860 generic.go:334] "Generic (PLEG): container finished" podID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerID="c726714b2fd451d9502ceb048bf4825aa99d36bddd225566d9e8fb4fb03010d0" exitCode=0 Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.262683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerDied","Data":"c726714b2fd451d9502ceb048bf4825aa99d36bddd225566d9e8fb4fb03010d0"} Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.446619 4860 patch_prober.go:28] interesting pod/router-default-5444994796-dkzf5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:15:43 crc kubenswrapper[4860]: [-]has-synced failed: reason withheld Jan 23 08:15:43 crc kubenswrapper[4860]: [+]process-running ok Jan 23 08:15:43 crc kubenswrapper[4860]: healthz check failed Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.446827 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dkzf5" podUID="9d48bd31-b86d-4b22-aa0c-278425b2dbb6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.630606 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.813277 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kubelet-dir\") pod \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.813325 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kube-api-access\") pod \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\" (UID: \"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb\") " Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.813994 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ac471c4-a681-452a-bcb5-5b1ca1b43fbb" (UID: "3ac471c4-a681-452a-bcb5-5b1ca1b43fbb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.819179 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ac471c4-a681-452a-bcb5-5b1ca1b43fbb" (UID: "3ac471c4-a681-452a-bcb5-5b1ca1b43fbb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.914458 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:43 crc kubenswrapper[4860]: I0123 08:15:43.914491 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ac471c4-a681-452a-bcb5-5b1ca1b43fbb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:44 crc kubenswrapper[4860]: I0123 08:15:44.285072 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ac471c4-a681-452a-bcb5-5b1ca1b43fbb","Type":"ContainerDied","Data":"06354c2bff48d1c8e957c87f3f648daf0f34bd7b83be8e587c8c1aa037d01546"} Jan 23 08:15:44 crc kubenswrapper[4860]: I0123 08:15:44.285117 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06354c2bff48d1c8e957c87f3f648daf0f34bd7b83be8e587c8c1aa037d01546" Jan 23 08:15:44 crc kubenswrapper[4860]: I0123 08:15:44.285213 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 08:15:44 crc kubenswrapper[4860]: E0123 08:15:44.399196 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod3ac471c4_a681_452a_bcb5_5b1ca1b43fbb.slice/crio-06354c2bff48d1c8e957c87f3f648daf0f34bd7b83be8e587c8c1aa037d01546\": RecentStats: unable to find data in memory cache]" Jan 23 08:15:44 crc kubenswrapper[4860]: I0123 08:15:44.445476 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:44 crc kubenswrapper[4860]: I0123 08:15:44.448498 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dkzf5" Jan 23 08:15:45 crc kubenswrapper[4860]: I0123 08:15:45.301224 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e508ae29-9716-45a6-acee-c1f9d6c5f15d","Type":"ContainerStarted","Data":"3e2c2e72d23115c473c0942e32d81b15b0e4b3b01e35472cd7d75837588cf5fd"} Jan 23 08:15:45 crc kubenswrapper[4860]: I0123 08:15:45.319043 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.319002487 podStartE2EDuration="3.319002487s" podCreationTimestamp="2026-01-23 08:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:45.31593943 +0000 UTC m=+51.943989605" watchObservedRunningTime="2026-01-23 08:15:45.319002487 +0000 UTC m=+51.947052672" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.310396 4860 generic.go:334] "Generic (PLEG): container finished" podID="e508ae29-9716-45a6-acee-c1f9d6c5f15d" containerID="3e2c2e72d23115c473c0942e32d81b15b0e4b3b01e35472cd7d75837588cf5fd" exitCode=0 Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.310439 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e508ae29-9716-45a6-acee-c1f9d6c5f15d","Type":"ContainerDied","Data":"3e2c2e72d23115c473c0942e32d81b15b0e4b3b01e35472cd7d75837588cf5fd"} Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.450389 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.450479 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.455818 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.464256 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.500709 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.754821 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.754921 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.759863 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.760272 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.805866 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 08:15:46 crc kubenswrapper[4860]: W0123 08:15:46.881524 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5524fbf4697ba2fbd08889493d2b6572356b23f60a4c4d58b98455f2f5bd8acb WatchSource:0}: Error finding container 5524fbf4697ba2fbd08889493d2b6572356b23f60a4c4d58b98455f2f5bd8acb: Status 404 returned error can't find the container with id 5524fbf4697ba2fbd08889493d2b6572356b23f60a4c4d58b98455f2f5bd8acb Jan 23 08:15:46 crc kubenswrapper[4860]: I0123 08:15:46.992419 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:47 crc kubenswrapper[4860]: W0123 08:15:47.297038 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-09d020dc559fd9cef47e814e9a0a5c8a448cdbf6301bf05a5f5ec28e4dce6257 WatchSource:0}: Error finding container 09d020dc559fd9cef47e814e9a0a5c8a448cdbf6301bf05a5f5ec28e4dce6257: Status 404 returned error can't find the container with id 09d020dc559fd9cef47e814e9a0a5c8a448cdbf6301bf05a5f5ec28e4dce6257 Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.319006 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"09d020dc559fd9cef47e814e9a0a5c8a448cdbf6301bf05a5f5ec28e4dce6257"} Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.321397 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c2edf0d3fbbb796b0edc337db19156a1b4e4eddaf5701a4efee2689d9deca67"} Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.322999 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5524fbf4697ba2fbd08889493d2b6572356b23f60a4c4d58b98455f2f5bd8acb"} Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.604430 4860 patch_prober.go:28] interesting pod/console-f9d7485db-vtj8m container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.604481 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vtj8m" podUID="72589342-3cb8-4e43-bad0-f1726f70d77a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.643240 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.675929 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nltcq" Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.778312 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kubelet-dir\") pod \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.778448 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kube-api-access\") pod \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\" (UID: \"e508ae29-9716-45a6-acee-c1f9d6c5f15d\") " Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.779655 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e508ae29-9716-45a6-acee-c1f9d6c5f15d" (UID: "e508ae29-9716-45a6-acee-c1f9d6c5f15d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.787182 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e508ae29-9716-45a6-acee-c1f9d6c5f15d" (UID: "e508ae29-9716-45a6-acee-c1f9d6c5f15d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.879922 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4860]: I0123 08:15:47.879946 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e508ae29-9716-45a6-acee-c1f9d6c5f15d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.195461 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.214176 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.350060 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f50283d3bd608a337359194fd7f1f66a94151f558bf87d05ad4d1db82561805b"} Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.354085 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c0058e3a0da5e66becdbdbbf99322c73550b0423faf79b8b7f3790b775f478be"} Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.360009 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.360075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e508ae29-9716-45a6-acee-c1f9d6c5f15d","Type":"ContainerDied","Data":"c9c31480a18392decb7a2aabb34f4b2598c66de65a18ebd3b52e763ed64e6791"} Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.360136 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c31480a18392decb7a2aabb34f4b2598c66de65a18ebd3b52e763ed64e6791" Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.365385 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"041f589242d541544734627b443cf56fca01fba6ed2bdefee413edc1419d2458"} Jan 23 08:15:48 crc kubenswrapper[4860]: I0123 08:15:48.365684 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=0.36566492 podStartE2EDuration="365.66492ms" podCreationTimestamp="2026-01-23 08:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:48.364157922 +0000 UTC m=+54.992208107" watchObservedRunningTime="2026-01-23 08:15:48.36566492 +0000 UTC m=+54.993715125" Jan 23 08:15:49 crc kubenswrapper[4860]: I0123 08:15:49.373295 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:15:51 crc kubenswrapper[4860]: E0123 08:15:51.432464 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:15:51 crc kubenswrapper[4860]: E0123 08:15:51.435323 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:15:51 crc kubenswrapper[4860]: E0123 08:15:51.436782 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:15:51 crc kubenswrapper[4860]: E0123 08:15:51.436874 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:15:51 crc kubenswrapper[4860]: I0123 08:15:51.875374 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:15:57 crc kubenswrapper[4860]: I0123 08:15:57.608335 
4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:57 crc kubenswrapper[4860]: I0123 08:15:57.613639 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vtj8m" Jan 23 08:15:59 crc kubenswrapper[4860]: I0123 08:15:59.279209 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:16:01 crc kubenswrapper[4860]: E0123 08:16:01.433719 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:01 crc kubenswrapper[4860]: E0123 08:16:01.436551 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:01 crc kubenswrapper[4860]: E0123 08:16:01.438315 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:01 crc kubenswrapper[4860]: E0123 08:16:01.438364 4860 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:16:08 crc kubenswrapper[4860]: I0123 08:16:08.499149 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-fzmlp_fef7b17c-47aa-41ef-83b0-753aba1cac55/kube-multus-additional-cni-plugins/0.log" Jan 23 08:16:08 crc kubenswrapper[4860]: I0123 08:16:08.499667 4860 generic.go:334] "Generic (PLEG): container finished" podID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" exitCode=137 Jan 23 08:16:08 crc kubenswrapper[4860]: I0123 08:16:08.499704 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" event={"ID":"fef7b17c-47aa-41ef-83b0-753aba1cac55","Type":"ContainerDied","Data":"f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99"} Jan 23 08:16:08 crc kubenswrapper[4860]: I0123 08:16:08.669760 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 08:16:11 crc kubenswrapper[4860]: I0123 08:16:11.164461 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lv7rq" Jan 23 08:16:11 crc kubenswrapper[4860]: I0123 08:16:11.216664 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.216633968 
podStartE2EDuration="3.216633968s" podCreationTimestamp="2026-01-23 08:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:11.213581387 +0000 UTC m=+77.841631582" watchObservedRunningTime="2026-01-23 08:16:11.216633968 +0000 UTC m=+77.844684193" Jan 23 08:16:11 crc kubenswrapper[4860]: E0123 08:16:11.430572 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:11 crc kubenswrapper[4860]: E0123 08:16:11.431337 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:11 crc kubenswrapper[4860]: E0123 08:16:11.431671 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:11 crc kubenswrapper[4860]: E0123 08:16:11.431753 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.000493 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 08:16:20 crc kubenswrapper[4860]: E0123 08:16:20.001308 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ac471c4-a681-452a-bcb5-5b1ca1b43fbb" containerName="pruner" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.001328 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ac471c4-a681-452a-bcb5-5b1ca1b43fbb" containerName="pruner" Jan 23 08:16:20 crc kubenswrapper[4860]: E0123 08:16:20.001348 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e508ae29-9716-45a6-acee-c1f9d6c5f15d" containerName="pruner" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.001360 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e508ae29-9716-45a6-acee-c1f9d6c5f15d" containerName="pruner" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.001554 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e508ae29-9716-45a6-acee-c1f9d6c5f15d" containerName="pruner" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.001585 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ac471c4-a681-452a-bcb5-5b1ca1b43fbb" containerName="pruner" Jan 23 08:16:20 crc kubenswrapper[4860]: 
I0123 08:16:20.005968 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.015288 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.020973 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.029738 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.093120 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.093198 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.194407 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.194568 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.194777 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.620518 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:20 crc kubenswrapper[4860]: I0123 08:16:20.920192 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:21 crc kubenswrapper[4860]: E0123 08:16:21.431311 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:21 crc kubenswrapper[4860]: E0123 08:16:21.432309 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:21 crc kubenswrapper[4860]: E0123 08:16:21.432981 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:21 crc kubenswrapper[4860]: E0123 08:16:21.433095 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.595851 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.596964 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.609369 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.771489 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-var-lock\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.771553 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kube-api-access\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.771731 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.873631 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kube-api-access\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.873739 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.873863 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-var-lock\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.873908 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.873952 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-var-lock\") pod \"installer-9-crc\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.896448 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:25 crc kubenswrapper[4860]: I0123 08:16:25.979075 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:16:30 crc kubenswrapper[4860]: I0123 08:16:30.168061 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 08:16:31 crc kubenswrapper[4860]: E0123 08:16:31.430255 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:31 crc kubenswrapper[4860]: E0123 08:16:31.430831 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:31 crc kubenswrapper[4860]: E0123 08:16:31.432382 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:31 crc kubenswrapper[4860]: E0123 08:16:31.432489 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:16:37 crc kubenswrapper[4860]: E0123 08:16:37.361085 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 08:16:37 crc kubenswrapper[4860]: E0123 08:16:37.361737 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2g29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-44v8b_openshift-marketplace(f2bd04fa-7e4e-4a22-9b93-418f22e296b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:37 crc kubenswrapper[4860]: E0123 08:16:37.364074 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-44v8b" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" Jan 23 08:16:41 crc kubenswrapper[4860]: E0123 08:16:41.431046 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:41 crc kubenswrapper[4860]: E0123 08:16:41.431991 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:41 crc kubenswrapper[4860]: E0123 08:16:41.432240 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:16:41 crc kubenswrapper[4860]: E0123 08:16:41.432265 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:16:42 crc kubenswrapper[4860]: E0123 08:16:42.021123 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-44v8b" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" Jan 23 08:16:42 crc kubenswrapper[4860]: E0123 08:16:42.112854 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 08:16:42 crc kubenswrapper[4860]: E0123 08:16:42.113061 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bc69p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2x4xz_openshift-marketplace(13d8c162-6bfc-447f-b688-6f4e74687cd8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:42 crc kubenswrapper[4860]: E0123 08:16:42.114383 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2x4xz" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.858155 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2x4xz" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.950208 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.950385 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x42l2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wqsdd_openshift-marketplace(92e9a494-d0d1-4bb9-8cc9-86e044e7a75b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.951766 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wqsdd" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.990296 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.990468 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2gvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zrqw5_openshift-marketplace(ce3742d9-24a7-4b15-9301-8d03596ae37b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:43 crc kubenswrapper[4860]: E0123 08:16:43.991636 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zrqw5" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" Jan 23 08:16:44 crc kubenswrapper[4860]: E0123 08:16:44.006546 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 08:16:44 crc kubenswrapper[4860]: E0123 08:16:44.006680 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcpfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vlqk5_openshift-marketplace(6938dfcf-ed32-45c3-a2a6-04f0b60315ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:44 crc kubenswrapper[4860]: E0123 08:16:44.008119 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vlqk5" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" Jan 23 08:16:44 crc kubenswrapper[4860]: E0123 08:16:44.012951 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 08:16:44 crc kubenswrapper[4860]: E0123 08:16:44.013121 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnh4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tcrzp_openshift-marketplace(b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:44 crc kubenswrapper[4860]: E0123 08:16:44.014289 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tcrzp" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.374562 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zrqw5" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.374675 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vlqk5" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.374686 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wqsdd" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.374732 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tcrzp" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" Jan 23 08:16:45 crc 
kubenswrapper[4860]: I0123 08:16:45.451774 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-fzmlp_fef7b17c-47aa-41ef-83b0-753aba1cac55/kube-multus-additional-cni-plugins/0.log" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.451960 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.479523 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.482364 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.482573 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwz2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dws9d_openshift-marketplace(f61b81b4-d438-4fd0-a8c0-4f609b4d37c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.480167 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5tz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cs456_openshift-marketplace(94cbcca6-2f65-40ab-ab9f-37ac19db1f4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.483839 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dws9d" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.484095 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cs456" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.638977 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fef7b17c-47aa-41ef-83b0-753aba1cac55-tuning-conf-dir\") pod \"fef7b17c-47aa-41ef-83b0-753aba1cac55\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.639322 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fef7b17c-47aa-41ef-83b0-753aba1cac55-ready\") pod \"fef7b17c-47aa-41ef-83b0-753aba1cac55\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.639384 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtpht\" (UniqueName: \"kubernetes.io/projected/fef7b17c-47aa-41ef-83b0-753aba1cac55-kube-api-access-wtpht\") pod \"fef7b17c-47aa-41ef-83b0-753aba1cac55\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.639460 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fef7b17c-47aa-41ef-83b0-753aba1cac55-cni-sysctl-allowlist\") pod \"fef7b17c-47aa-41ef-83b0-753aba1cac55\" (UID: \"fef7b17c-47aa-41ef-83b0-753aba1cac55\") " Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.640621 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fef7b17c-47aa-41ef-83b0-753aba1cac55-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "fef7b17c-47aa-41ef-83b0-753aba1cac55" (UID: "fef7b17c-47aa-41ef-83b0-753aba1cac55"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.640673 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fef7b17c-47aa-41ef-83b0-753aba1cac55-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "fef7b17c-47aa-41ef-83b0-753aba1cac55" (UID: "fef7b17c-47aa-41ef-83b0-753aba1cac55"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.641055 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef7b17c-47aa-41ef-83b0-753aba1cac55-ready" (OuterVolumeSpecName: "ready") pod "fef7b17c-47aa-41ef-83b0-753aba1cac55" (UID: "fef7b17c-47aa-41ef-83b0-753aba1cac55"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.646516 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef7b17c-47aa-41ef-83b0-753aba1cac55-kube-api-access-wtpht" (OuterVolumeSpecName: "kube-api-access-wtpht") pod "fef7b17c-47aa-41ef-83b0-753aba1cac55" (UID: "fef7b17c-47aa-41ef-83b0-753aba1cac55"). InnerVolumeSpecName "kube-api-access-wtpht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.712428 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-fzmlp_fef7b17c-47aa-41ef-83b0-753aba1cac55/kube-multus-additional-cni-plugins/0.log" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.713212 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.713339 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-fzmlp" event={"ID":"fef7b17c-47aa-41ef-83b0-753aba1cac55","Type":"ContainerDied","Data":"f0d054fd4a78303b4b14ae5fd859c36afe0c1d0a91228a85fdf1eb20bdf74922"} Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.713441 4860 scope.go:117] "RemoveContainer" containerID="f28c693c2fb875c988611298876bb71547cbe939ef025e45672c585bf4f1bf99" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.714882 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cs456" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" Jan 23 08:16:45 crc kubenswrapper[4860]: E0123 08:16:45.716302 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dws9d" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.740422 4860 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/fef7b17c-47aa-41ef-83b0-753aba1cac55-ready\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.740446 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtpht\" (UniqueName: \"kubernetes.io/projected/fef7b17c-47aa-41ef-83b0-753aba1cac55-kube-api-access-wtpht\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.740457 4860 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fef7b17c-47aa-41ef-83b0-753aba1cac55-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.740466 4860 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fef7b17c-47aa-41ef-83b0-753aba1cac55-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.752443 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fzmlp"] Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.755777 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-fzmlp"] Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.785936 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 08:16:45 crc kubenswrapper[4860]: W0123 08:16:45.799058 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0d0b7153_b2e7_4920_96bf_422b58e8b3de.slice/crio-f36c746bf3456f627eb09bef2b29986483d4735f14bda160fe2c0edaa86cd5b1 WatchSource:0}: Error finding container f36c746bf3456f627eb09bef2b29986483d4735f14bda160fe2c0edaa86cd5b1: Status 404 returned error can't find the container with id f36c746bf3456f627eb09bef2b29986483d4735f14bda160fe2c0edaa86cd5b1 Jan 23 08:16:45 crc kubenswrapper[4860]: I0123 08:16:45.838121 4860 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 08:16:46 crc kubenswrapper[4860]: I0123 08:16:46.718368 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d0b7153-b2e7-4920-96bf-422b58e8b3de","Type":"ContainerStarted","Data":"fdc0c1bc7658cf9f916b4f332e5e6c59486cf761666c340b0fa83480fbf34ac9"} Jan 23 08:16:46 crc kubenswrapper[4860]: I0123 08:16:46.718628 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d0b7153-b2e7-4920-96bf-422b58e8b3de","Type":"ContainerStarted","Data":"f36c746bf3456f627eb09bef2b29986483d4735f14bda160fe2c0edaa86cd5b1"} Jan 23 08:16:46 crc kubenswrapper[4860]: I0123 08:16:46.720773 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d19be086-d9bd-41eb-8e5b-16f2e1d8003a","Type":"ContainerStarted","Data":"6d2fa68d83c590699808f113e40b45fa5c2ccd66dfbc642810a14ff58cb5d72c"} Jan 23 08:16:46 crc kubenswrapper[4860]: I0123 08:16:46.720820 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d19be086-d9bd-41eb-8e5b-16f2e1d8003a","Type":"ContainerStarted","Data":"e8d58e509ef78a396afc588386f9812e9b523aac4a0412ed13a557ebb0e547cb"} Jan 23 08:16:46 crc kubenswrapper[4860]: I0123 08:16:46.733196 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=21.733176736 podStartE2EDuration="21.733176736s" podCreationTimestamp="2026-01-23 08:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:46.731593484 +0000 UTC m=+113.359643679" watchObservedRunningTime="2026-01-23 08:16:46.733176736 +0000 UTC m=+113.361226931" Jan 23 08:16:47 crc kubenswrapper[4860]: I0123 08:16:47.665746 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" path="/var/lib/kubelet/pods/fef7b17c-47aa-41ef-83b0-753aba1cac55/volumes" Jan 23 08:16:47 crc kubenswrapper[4860]: I0123 08:16:47.730999 4860 generic.go:334] "Generic (PLEG): container finished" podID="d19be086-d9bd-41eb-8e5b-16f2e1d8003a" containerID="6d2fa68d83c590699808f113e40b45fa5c2ccd66dfbc642810a14ff58cb5d72c" exitCode=0 Jan 23 08:16:47 crc kubenswrapper[4860]: I0123 08:16:47.731133 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d19be086-d9bd-41eb-8e5b-16f2e1d8003a","Type":"ContainerDied","Data":"6d2fa68d83c590699808f113e40b45fa5c2ccd66dfbc642810a14ff58cb5d72c"} Jan 23 08:16:48 crc kubenswrapper[4860]: I0123 08:16:48.997164 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.187956 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kubelet-dir\") pod \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.188093 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kube-api-access\") pod \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\" (UID: \"d19be086-d9bd-41eb-8e5b-16f2e1d8003a\") " Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.188128 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d19be086-d9bd-41eb-8e5b-16f2e1d8003a" (UID: "d19be086-d9bd-41eb-8e5b-16f2e1d8003a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.188302 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.194323 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d19be086-d9bd-41eb-8e5b-16f2e1d8003a" (UID: "d19be086-d9bd-41eb-8e5b-16f2e1d8003a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.289632 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d19be086-d9bd-41eb-8e5b-16f2e1d8003a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.746347 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d19be086-d9bd-41eb-8e5b-16f2e1d8003a","Type":"ContainerDied","Data":"e8d58e509ef78a396afc588386f9812e9b523aac4a0412ed13a557ebb0e547cb"} Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.746391 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d58e509ef78a396afc588386f9812e9b523aac4a0412ed13a557ebb0e547cb" Jan 23 08:16:49 crc kubenswrapper[4860]: I0123 08:16:49.746453 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 08:16:54 crc kubenswrapper[4860]: I0123 08:16:54.771608 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerID="7676edafe78ba02e0ac92685523271df21b2402728ed8a7db80119d38d7c0fe7" exitCode=0 Jan 23 08:16:54 crc kubenswrapper[4860]: I0123 08:16:54.772129 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v8b" event={"ID":"f2bd04fa-7e4e-4a22-9b93-418f22e296b2","Type":"ContainerDied","Data":"7676edafe78ba02e0ac92685523271df21b2402728ed8a7db80119d38d7c0fe7"} Jan 23 08:16:56 crc kubenswrapper[4860]: I0123 08:16:56.789174 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerStarted","Data":"ca415b17882df9ae68d9e12ce7e4dfaa0b765d2bc322d7bd4d2fa5cc9e694581"} Jan 23 08:16:56 crc kubenswrapper[4860]: I0123 08:16:56.795383 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v8b" event={"ID":"f2bd04fa-7e4e-4a22-9b93-418f22e296b2","Type":"ContainerStarted","Data":"ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1"} Jan 23 08:16:56 crc kubenswrapper[4860]: I0123 08:16:56.840593 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-44v8b" podStartSLOduration=2.25997336 podStartE2EDuration="1m19.840578914s" podCreationTimestamp="2026-01-23 08:15:37 +0000 UTC" firstStartedPulling="2026-01-23 08:15:38.996207708 +0000 UTC m=+45.624257893" lastFinishedPulling="2026-01-23 08:16:56.576813222 +0000 UTC m=+123.204863447" observedRunningTime="2026-01-23 08:16:56.839161177 +0000 UTC m=+123.467211352" watchObservedRunningTime="2026-01-23 08:16:56.840578914 +0000 UTC m=+123.468629099" Jan 23 08:16:57 crc kubenswrapper[4860]: I0123 08:16:57.808681 4860 generic.go:334] "Generic (PLEG): container finished" podID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerID="ca415b17882df9ae68d9e12ce7e4dfaa0b765d2bc322d7bd4d2fa5cc9e694581" exitCode=0 Jan 23 08:16:57 crc kubenswrapper[4860]: I0123 08:16:57.808737 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerDied","Data":"ca415b17882df9ae68d9e12ce7e4dfaa0b765d2bc322d7bd4d2fa5cc9e694581"} Jan 23 08:16:57 crc kubenswrapper[4860]: I0123 08:16:57.900258 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:16:57 crc kubenswrapper[4860]: I0123 08:16:57.900304 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:16:58 crc kubenswrapper[4860]: I0123 08:16:58.816588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerStarted","Data":"8edc210866a1bbbe3bea03d696219fbb1e659c61e8989ef9d0dba56d1d41d6c3"} Jan 23 08:16:59 crc kubenswrapper[4860]: I0123 08:16:59.014655 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-44v8b" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="registry-server" probeResult="failure" output=< Jan 23 08:16:59 crc kubenswrapper[4860]: timeout: failed to connect 
service ":50051" within 1s Jan 23 08:16:59 crc kubenswrapper[4860]: > Jan 23 08:16:59 crc kubenswrapper[4860]: I0123 08:16:59.838450 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2x4xz" podStartSLOduration=3.513030475 podStartE2EDuration="1m19.838433181s" podCreationTimestamp="2026-01-23 08:15:40 +0000 UTC" firstStartedPulling="2026-01-23 08:15:42.209430752 +0000 UTC m=+48.837480937" lastFinishedPulling="2026-01-23 08:16:58.534833438 +0000 UTC m=+125.162883643" observedRunningTime="2026-01-23 08:16:59.837830615 +0000 UTC m=+126.465880810" watchObservedRunningTime="2026-01-23 08:16:59.838433181 +0000 UTC m=+126.466483366" Jan 23 08:17:01 crc kubenswrapper[4860]: I0123 08:17:01.189816 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:17:01 crc kubenswrapper[4860]: I0123 08:17:01.189887 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:17:02 crc kubenswrapper[4860]: I0123 08:17:02.251250 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2x4xz" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="registry-server" probeResult="failure" output=< Jan 23 08:17:02 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 23 08:17:02 crc kubenswrapper[4860]: > Jan 23 08:17:08 crc kubenswrapper[4860]: I0123 08:17:08.920621 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:17:08 crc kubenswrapper[4860]: I0123 08:17:08.971891 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:17:11 crc kubenswrapper[4860]: I0123 08:17:11.240235 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:17:11 crc kubenswrapper[4860]: I0123 08:17:11.280484 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.909722 4860 generic.go:334] "Generic (PLEG): container finished" podID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerID="cc2f29140d62d5463ce0408b62d2bc3f1cff06f0fa09497c54b4209b7cc48506" exitCode=0 Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.909841 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerDied","Data":"cc2f29140d62d5463ce0408b62d2bc3f1cff06f0fa09497c54b4209b7cc48506"} Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.912713 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerStarted","Data":"bb5c54edbf79b68b6ff550e0cb53c6c9e7c47f79fbb1e42e26b8c4e1ca6a6d28"} Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.914731 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerID="72d9248d7abbc8039db720be0bfa76519665348a70824aa7d322d39262e5ce27" exitCode=0 Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.914783 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zrqw5" event={"ID":"ce3742d9-24a7-4b15-9301-8d03596ae37b","Type":"ContainerDied","Data":"72d9248d7abbc8039db720be0bfa76519665348a70824aa7d322d39262e5ce27"} Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.917815 4860 generic.go:334] "Generic (PLEG): container finished" podID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerID="6ae08939fb4c8f3975be36eeed5a5ce8ab73b15856a589d516f42c18307ff2f1" exitCode=0 Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.917875 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs456" event={"ID":"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b","Type":"ContainerDied","Data":"6ae08939fb4c8f3975be36eeed5a5ce8ab73b15856a589d516f42c18307ff2f1"} Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.921398 4860 generic.go:334] "Generic (PLEG): container finished" podID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerID="8e4d7769a76f1642866b75757e7d700aa75c9dc6f51abc09a6e50840c6e8faa8" exitCode=0 Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.921446 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqsdd" event={"ID":"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b","Type":"ContainerDied","Data":"8e4d7769a76f1642866b75757e7d700aa75c9dc6f51abc09a6e50840c6e8faa8"} Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.924745 4860 generic.go:334] "Generic (PLEG): container finished" podID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerID="42765b2438eb06f38cd6b310bcd0d9c693675ad16eb0c9ead5ce252e7c671704" exitCode=0 Jan 23 08:17:14 crc kubenswrapper[4860]: I0123 08:17:14.924779 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlqk5" event={"ID":"6938dfcf-ed32-45c3-a2a6-04f0b60315ac","Type":"ContainerDied","Data":"42765b2438eb06f38cd6b310bcd0d9c693675ad16eb0c9ead5ce252e7c671704"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.941788 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlqk5" event={"ID":"6938dfcf-ed32-45c3-a2a6-04f0b60315ac","Type":"ContainerStarted","Data":"8397f0e97db5c35fb66c694b5f0e7c327a2680a288de0937aea5a96e378158d4"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.946573 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerStarted","Data":"364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.948969 4860 generic.go:334] "Generic (PLEG): container finished" podID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerID="bb5c54edbf79b68b6ff550e0cb53c6c9e7c47f79fbb1e42e26b8c4e1ca6a6d28" exitCode=0 Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.949030 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerDied","Data":"bb5c54edbf79b68b6ff550e0cb53c6c9e7c47f79fbb1e42e26b8c4e1ca6a6d28"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.953619 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqw5" event={"ID":"ce3742d9-24a7-4b15-9301-8d03596ae37b","Type":"ContainerStarted","Data":"a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.956639 4860 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs456" event={"ID":"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b","Type":"ContainerStarted","Data":"ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.969426 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqsdd" event={"ID":"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b","Type":"ContainerStarted","Data":"a8708a71e5af1e5399c3c0c1b9783feb9530b83aa8b9ef3c5a1439f97606b60d"} Jan 23 08:17:15 crc kubenswrapper[4860]: I0123 08:17:15.977721 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vlqk5" podStartSLOduration=2.624031058 podStartE2EDuration="1m36.977697028s" podCreationTimestamp="2026-01-23 08:15:39 +0000 UTC" firstStartedPulling="2026-01-23 08:15:41.148944421 +0000 UTC m=+47.776994606" lastFinishedPulling="2026-01-23 08:17:15.502610391 +0000 UTC m=+142.130660576" observedRunningTime="2026-01-23 08:17:15.977466301 +0000 UTC m=+142.605516486" watchObservedRunningTime="2026-01-23 08:17:15.977697028 +0000 UTC m=+142.605747213" Jan 23 08:17:16 crc kubenswrapper[4860]: I0123 08:17:16.031604 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cs456" podStartSLOduration=2.71413874 podStartE2EDuration="1m38.031579868s" podCreationTimestamp="2026-01-23 08:15:38 +0000 UTC" firstStartedPulling="2026-01-23 08:15:40.107342584 +0000 UTC m=+46.735392769" lastFinishedPulling="2026-01-23 08:17:15.424783692 +0000 UTC m=+142.052833897" observedRunningTime="2026-01-23 08:17:16.005679816 +0000 UTC m=+142.633730001" watchObservedRunningTime="2026-01-23 08:17:16.031579868 +0000 UTC m=+142.659630053" Jan 23 08:17:16 crc kubenswrapper[4860]: I0123 08:17:16.068524 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zrqw5" podStartSLOduration=3.749524207 podStartE2EDuration="1m39.068503234s" podCreationTimestamp="2026-01-23 08:15:37 +0000 UTC" firstStartedPulling="2026-01-23 08:15:40.099715962 +0000 UTC m=+46.727766147" lastFinishedPulling="2026-01-23 08:17:15.418694989 +0000 UTC m=+142.046745174" observedRunningTime="2026-01-23 08:17:16.048397827 +0000 UTC m=+142.676448012" watchObservedRunningTime="2026-01-23 08:17:16.068503234 +0000 UTC m=+142.696553419" Jan 23 08:17:16 crc kubenswrapper[4860]: I0123 08:17:16.069459 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dws9d" podStartSLOduration=3.811483048 podStartE2EDuration="1m39.06945065s" podCreationTimestamp="2026-01-23 08:15:37 +0000 UTC" firstStartedPulling="2026-01-23 08:15:40.090642324 +0000 UTC m=+46.718692509" lastFinishedPulling="2026-01-23 08:17:15.348609926 +0000 UTC m=+141.976660111" observedRunningTime="2026-01-23 08:17:16.061655372 +0000 UTC m=+142.689705557" watchObservedRunningTime="2026-01-23 08:17:16.06945065 +0000 UTC m=+142.697500835" Jan 23 08:17:16 crc kubenswrapper[4860]: I0123 08:17:16.088205 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wqsdd" podStartSLOduration=1.8976142120000001 podStartE2EDuration="1m36.088184821s" podCreationTimestamp="2026-01-23 08:15:40 +0000 UTC" firstStartedPulling="2026-01-23 08:15:41.147032372 +0000 UTC m=+47.775082557" lastFinishedPulling="2026-01-23 08:17:15.337602981 
+0000 UTC m=+141.965653166" observedRunningTime="2026-01-23 08:17:16.086780704 +0000 UTC m=+142.714830889" watchObservedRunningTime="2026-01-23 08:17:16.088184821 +0000 UTC m=+142.716235016" Jan 23 08:17:16 crc kubenswrapper[4860]: I0123 08:17:16.976523 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerStarted","Data":"407868d6debf9ea1dbf24a35b1c660f4761f3d4e4567d89e45b4a2af9ab7f2f5"} Jan 23 08:17:16 crc kubenswrapper[4860]: I0123 08:17:16.995064 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tcrzp" podStartSLOduration=2.79120403 podStartE2EDuration="1m35.995040006s" podCreationTimestamp="2026-01-23 08:15:41 +0000 UTC" firstStartedPulling="2026-01-23 08:15:43.265646125 +0000 UTC m=+49.893696310" lastFinishedPulling="2026-01-23 08:17:16.469482101 +0000 UTC m=+143.097532286" observedRunningTime="2026-01-23 08:17:16.991876322 +0000 UTC m=+143.619926507" watchObservedRunningTime="2026-01-23 08:17:16.995040006 +0000 UTC m=+143.623090201" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.106309 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.106653 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.152863 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.310580 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.310638 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.356656 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.519962 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.520054 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:17:18 crc kubenswrapper[4860]: I0123 08:17:18.578571 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:17:19 crc kubenswrapper[4860]: I0123 08:17:19.251086 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8khjj"] Jan 23 08:17:20 crc kubenswrapper[4860]: I0123 08:17:20.087556 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:17:20 crc kubenswrapper[4860]: I0123 08:17:20.087619 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:17:20 crc kubenswrapper[4860]: I0123 08:17:20.137328 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:17:20 crc kubenswrapper[4860]: I0123 08:17:20.520744 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:17:20 crc kubenswrapper[4860]: I0123 08:17:20.521956 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:17:20 crc kubenswrapper[4860]: I0123 08:17:20.571188 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:17:21 crc kubenswrapper[4860]: I0123 08:17:21.044331 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:17:21 crc kubenswrapper[4860]: I0123 08:17:21.051290 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:17:21 crc kubenswrapper[4860]: I0123 08:17:21.528939 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:17:21 crc kubenswrapper[4860]: I0123 08:17:21.528994 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:17:22 crc kubenswrapper[4860]: I0123 08:17:22.566003 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tcrzp" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="registry-server" probeResult="failure" output=< Jan 23 08:17:22 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 23 08:17:22 crc kubenswrapper[4860]: > Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.115340 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.115823 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19be086-d9bd-41eb-8e5b-16f2e1d8003a" containerName="pruner" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.115834 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19be086-d9bd-41eb-8e5b-16f2e1d8003a" containerName="pruner" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.115853 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.115859 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.115957 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef7b17c-47aa-41ef-83b0-753aba1cac55" containerName="kube-multus-additional-cni-plugins" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.115968 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19be086-d9bd-41eb-8e5b-16f2e1d8003a" containerName="pruner" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116318 4860 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116454 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116715 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b" gracePeriod=15 Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116734 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e2a2b7ac6ee5d1e06d3a489bf96c7a611c8e6774cea1778bf5835b61191edf3e" gracePeriod=15 Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116776 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d" gracePeriod=15 Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116741 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f" gracePeriod=15 Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.116734 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef" gracePeriod=15 Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117166 4860 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117388 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117405 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117426 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117433 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117440 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117446 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117456 4860 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117461 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117471 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117477 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117485 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117491 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 08:17:24 crc kubenswrapper[4860]: E0123 08:17:24.117500 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117505 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117601 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117613 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117623 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117632 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117639 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.117809 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.256715 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.256856 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.256953 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.256990 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.257011 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.257163 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.257190 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.257259 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358659 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358715 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358740 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358782 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358802 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358823 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358842 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358861 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358874 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358865 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358904 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358918 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 
08:17:24.358888 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.358946 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.359034 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:24 crc kubenswrapper[4860]: I0123 08:17:24.359131 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:26 crc kubenswrapper[4860]: E0123 08:17:26.423649 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod0d0b7153_b2e7_4920_96bf_422b58e8b3de.slice/crio-conmon-fdc0c1bc7658cf9f916b4f332e5e6c59486cf761666c340b0fa83480fbf34ac9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d.scope\": RecentStats: unable to find data in memory cache]" Jan 23 08:17:26 crc kubenswrapper[4860]: I0123 08:17:26.775936 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:17:26 crc kubenswrapper[4860]: I0123 08:17:26.776253 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:17:26 crc kubenswrapper[4860]: E0123 08:17:26.777968 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": dial tcp 38.129.56.70:6443: connect: connection refused" event=< Jan 23 08:17:26 crc kubenswrapper[4860]: 
&Event{ObjectMeta:{machine-config-daemon-tk8df.188d4e3ba942ba44 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-tk8df,UID:081dccf3-546f-41d3-bd98-ce1b0bbe037e,APIVersion:v1,ResourceVersion:26590,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 23 08:17:26 crc kubenswrapper[4860]: body: Jan 23 08:17:26 crc kubenswrapper[4860]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:17:26.776232516 +0000 UTC m=+153.404282701,LastTimestamp:2026-01-23 08:17:26.776232516 +0000 UTC m=+153.404282701,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:17:26 crc kubenswrapper[4860]: > Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.229678 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.230988 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.231990 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2a2b7ac6ee5d1e06d3a489bf96c7a611c8e6774cea1778bf5835b61191edf3e" exitCode=0 Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.232029 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f" exitCode=0 Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.232039 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef" exitCode=0 Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.232046 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d" exitCode=2 Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.232060 4860 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b" exitCode=0 Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.232072 4860 scope.go:117] "RemoveContainer" containerID="751647bc9957e1cca376d924ef87f40dd0bb6be81aade5db1b6f50c13c33f7d2" Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.233280 4860 generic.go:334] "Generic (PLEG): container finished" podID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" containerID="fdc0c1bc7658cf9f916b4f332e5e6c59486cf761666c340b0fa83480fbf34ac9" exitCode=0 Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.233320 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d0b7153-b2e7-4920-96bf-422b58e8b3de","Type":"ContainerDied","Data":"fdc0c1bc7658cf9f916b4f332e5e6c59486cf761666c340b0fa83480fbf34ac9"} Jan 23 08:17:27 crc kubenswrapper[4860]: I0123 08:17:27.239563 
4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.148628 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.149532 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.150225 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.361325 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.362008 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.362341 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.362736 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.486205 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.486793 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.487305 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.487696 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.556745 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.557871 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.558308 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.558599 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.559002 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.614919 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kubelet-dir\") pod \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.615010 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d0b7153-b2e7-4920-96bf-422b58e8b3de" (UID: "0d0b7153-b2e7-4920-96bf-422b58e8b3de"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.615059 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kube-api-access\") pod \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.615110 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-var-lock\") pod \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\" (UID: \"0d0b7153-b2e7-4920-96bf-422b58e8b3de\") " Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.615173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-var-lock" (OuterVolumeSpecName: "var-lock") pod "0d0b7153-b2e7-4920-96bf-422b58e8b3de" (UID: "0d0b7153-b2e7-4920-96bf-422b58e8b3de"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.615374 4860 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.615390 4860 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.620595 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d0b7153-b2e7-4920-96bf-422b58e8b3de" (UID: "0d0b7153-b2e7-4920-96bf-422b58e8b3de"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:28 crc kubenswrapper[4860]: I0123 08:17:28.716455 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d0b7153-b2e7-4920-96bf-422b58e8b3de-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:29 crc kubenswrapper[4860]: E0123 08:17:29.147566 4860 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.148088 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:29 crc kubenswrapper[4860]: W0123 08:17:29.169488 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2ffc9d06dc654c95222a3130fb677679403f684479451c5e28a76a6acfd79721 WatchSource:0}: Error finding container 2ffc9d06dc654c95222a3130fb677679403f684479451c5e28a76a6acfd79721: Status 404 returned error can't find the container with id 2ffc9d06dc654c95222a3130fb677679403f684479451c5e28a76a6acfd79721 Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.249233 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d0b7153-b2e7-4920-96bf-422b58e8b3de","Type":"ContainerDied","Data":"f36c746bf3456f627eb09bef2b29986483d4735f14bda160fe2c0edaa86cd5b1"} Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.249283 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36c746bf3456f627eb09bef2b29986483d4735f14bda160fe2c0edaa86cd5b1" Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.249259 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.250495 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2ffc9d06dc654c95222a3130fb677679403f684479451c5e28a76a6acfd79721"} Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.270249 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.270665 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.270955 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:29 crc kubenswrapper[4860]: I0123 08:17:29.271334 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.592251 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.594413 4860 status_manager.go:851] "Failed to get 
status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.595057 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.596012 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.597141 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.597747 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.627337 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.628146 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.628621 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.628933 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.629251 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:31 crc kubenswrapper[4860]: I0123 08:17:31.629540 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.093926 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.095649 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.096438 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.096626 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.096852 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.097390 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.097564 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.097755 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.133118 4860 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.133566 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.133828 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.134069 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.134344 4860 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.134385 4860 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.134599 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="200ms" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.163707 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.163896 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.163958 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.164237 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.164306 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.164305 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.265618 4860 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.265655 4860 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.265670 4860 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.268414 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0"} Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.271864 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.272871 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.272885 4860 scope.go:117] "RemoveContainer" containerID="e2a2b7ac6ee5d1e06d3a489bf96c7a611c8e6774cea1778bf5835b61191edf3e" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.286914 4860 scope.go:117] "RemoveContainer" containerID="8d6a2ff8a31950b72b3a7637f295e3f0f883eb4d7666a1eb40096a9e9f03404f" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.288890 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.289348 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.289625 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.289932 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.290245 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.290663 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.300077 4860 scope.go:117] "RemoveContainer" containerID="3632adc0645fae9a326eb8f9bf054be81f66e83b412af10be9b772465135cdef" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.312366 4860 scope.go:117] "RemoveContainer" containerID="3d8fe777336b9346c6d9bdd4ec1405d6dcb20141bbc9f5b450a18b3f6b83978d" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.323981 4860 scope.go:117] "RemoveContainer" containerID="20babb29aa25bb46e00019494de21d2df6dbe1ffbb0d4bd682a34a188e63db7b" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.335332 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="400ms" Jan 23 08:17:32 crc kubenswrapper[4860]: I0123 08:17:32.338761 4860 scope.go:117] "RemoveContainer" containerID="337b50bc66ae3d8675c0fde1b6a5845a2075c9918ba76f1cb5b78d0b88950107" Jan 23 08:17:32 crc kubenswrapper[4860]: E0123 08:17:32.736552 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="800ms" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.280995 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: E0123 08:17:33.281289 4860 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.281888 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.283353 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.283951 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.284526 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.285209 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: E0123 08:17:33.537734 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="1.6s" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.659766 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.660170 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.660497 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.660806 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.661176 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.661426 4860 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:33 crc kubenswrapper[4860]: I0123 08:17:33.673696 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 08:17:35 crc kubenswrapper[4860]: E0123 08:17:35.138938 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="3.2s" Jan 23 08:17:35 crc kubenswrapper[4860]: E0123 08:17:35.547375 4860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": dial tcp 38.129.56.70:6443: connect: connection refused" event=< Jan 23 08:17:35 crc kubenswrapper[4860]: &Event{ObjectMeta:{machine-config-daemon-tk8df.188d4e3ba942ba44 openshift-machine-config-operator 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-tk8df,UID:081dccf3-546f-41d3-bd98-ce1b0bbe037e,APIVersion:v1,ResourceVersion:26590,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Jan 23 08:17:35 crc kubenswrapper[4860]: body: Jan 23 08:17:35 crc kubenswrapper[4860]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:17:26.776232516 +0000 UTC m=+153.404282701,LastTimestamp:2026-01-23 08:17:26.776232516 +0000 UTC m=+153.404282701,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:17:35 crc kubenswrapper[4860]: > Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.657777 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.658571 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.658929 4860 status_manager.go:851] "Failed to get status for pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.659316 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.659748 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.660010 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.675587 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.675615 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:36 crc kubenswrapper[4860]: E0123 08:17:36.683976 4860 
mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:36 crc kubenswrapper[4860]: I0123 08:17:36.685066 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:36 crc kubenswrapper[4860]: W0123 08:17:36.704749 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-77b97d1063236bf0dbdd7413c2237beb1f7ec80460a878507803e02ccde4c829 WatchSource:0}: Error finding container 77b97d1063236bf0dbdd7413c2237beb1f7ec80460a878507803e02ccde4c829: Status 404 returned error can't find the container with id 77b97d1063236bf0dbdd7413c2237beb1f7ec80460a878507803e02ccde4c829 Jan 23 08:17:37 crc kubenswrapper[4860]: I0123 08:17:37.306046 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"77b97d1063236bf0dbdd7413c2237beb1f7ec80460a878507803e02ccde4c829"} Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.314485 4860 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0c4ac8229d1224eeaaadd92f66da5051ee183ee2c33ae8f70a8cacf5571d75be" exitCode=0 Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.314544 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0c4ac8229d1224eeaaadd92f66da5051ee183ee2c33ae8f70a8cacf5571d75be"} Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.314865 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.314889 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.316530 4860 status_manager.go:851] "Failed to get status for pod" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" pod="openshift-marketplace/certified-operators-dws9d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dws9d\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:38 crc kubenswrapper[4860]: E0123 08:17:38.316568 4860 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.316990 4860 status_manager.go:851] "Failed to get status for pod" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" pod="openshift-marketplace/certified-operators-cs456" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-cs456\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.317362 4860 status_manager.go:851] "Failed to get status for 
pod" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" pod="openshift-marketplace/redhat-operators-tcrzp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tcrzp\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.317688 4860 status_manager.go:851] "Failed to get status for pod" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:38 crc kubenswrapper[4860]: I0123 08:17:38.318072 4860 status_manager.go:851] "Failed to get status for pod" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" pod="openshift-marketplace/community-operators-zrqw5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-zrqw5\": dial tcp 38.129.56.70:6443: connect: connection refused" Jan 23 08:17:38 crc kubenswrapper[4860]: E0123 08:17:38.339964 4860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.70:6443: connect: connection refused" interval="6.4s" Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.325046 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36b4a7da89c9451fa60f4e019c1a316e71d21632075a57907a1db4d482454cea"} Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.325635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5cbab2678d6841cf8b0ff8b9444011d43d31974ee9dc970cbc50d56bfa184c9b"} Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.325650 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1519ab36c35930d75fb9fdddf83463218d56d186ca5fb496196c964fd0bc135"} Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.325660 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"726fbf163dce98a406c64457742153e543f17d241d89d84049cb5059d393c212"} Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.328493 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.328537 4860 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6bab43a399d0257c60af69194f91cc43ac9e523076d37022a4a9342963ef171e" exitCode=1 Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.328566 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6bab43a399d0257c60af69194f91cc43ac9e523076d37022a4a9342963ef171e"} Jan 23 08:17:39 crc kubenswrapper[4860]: I0123 08:17:39.328962 4860 
scope.go:117] "RemoveContainer" containerID="6bab43a399d0257c60af69194f91cc43ac9e523076d37022a4a9342963ef171e" Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.336437 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9037d9ee8e965d938ff3f34b9b65ec9c7874c3d3b1f6865057964820de4f6f54"} Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.336628 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.336759 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.336783 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.341515 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.341557 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"62c4c763aae7c28f67684a79dd7db483736744f21d7325d877d0c6d25c8871e5"} Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.978697 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:17:40 crc kubenswrapper[4860]: I0123 08:17:40.988528 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:17:41 crc kubenswrapper[4860]: I0123 08:17:41.347777 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:17:41 crc kubenswrapper[4860]: I0123 08:17:41.685613 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:41 crc kubenswrapper[4860]: I0123 08:17:41.686226 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:41 crc kubenswrapper[4860]: I0123 08:17:41.691832 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:44 crc kubenswrapper[4860]: I0123 08:17:44.283355 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" podUID="3ad4c125-6200-4d28-aeb0-8a0390508c91" containerName="oauth-openshift" containerID="cri-o://7e22b71e54d38dcb1540f64b14ab920600d8cbc3143d60e39016e319066aa32b" gracePeriod=15 Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.345988 4860 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.369226 4860 generic.go:334] "Generic (PLEG): container finished" podID="3ad4c125-6200-4d28-aeb0-8a0390508c91" 
containerID="7e22b71e54d38dcb1540f64b14ab920600d8cbc3143d60e39016e319066aa32b" exitCode=0 Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.369419 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" event={"ID":"3ad4c125-6200-4d28-aeb0-8a0390508c91","Type":"ContainerDied","Data":"7e22b71e54d38dcb1540f64b14ab920600d8cbc3143d60e39016e319066aa32b"} Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.369565 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.369577 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.373570 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.375844 4860 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="25e53f4d-5b53-4e56-96d0-abe3f50a9e06" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.819668 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986445 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-policies\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986507 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-login\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986537 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-session\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986564 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-cliconfig\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986586 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-provider-selection\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986603 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-service-ca\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986625 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-idp-0-file-data\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986646 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-ocp-branding-template\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986683 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-router-certs\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986711 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-trusted-ca-bundle\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-dir\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986763 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-serving-cert\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986779 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dww6l\" (UniqueName: \"kubernetes.io/projected/3ad4c125-6200-4d28-aeb0-8a0390508c91-kube-api-access-dww6l\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.986802 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-error\") pod \"3ad4c125-6200-4d28-aeb0-8a0390508c91\" (UID: \"3ad4c125-6200-4d28-aeb0-8a0390508c91\") " Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.987275 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.987410 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.987644 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.988271 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.988568 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.993044 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.993485 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.993669 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.994173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.994606 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.995045 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.995310 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad4c125-6200-4d28-aeb0-8a0390508c91-kube-api-access-dww6l" (OuterVolumeSpecName: "kube-api-access-dww6l") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "kube-api-access-dww6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.995427 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:45 crc kubenswrapper[4860]: I0123 08:17:45.996342 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3ad4c125-6200-4d28-aeb0-8a0390508c91" (UID: "3ad4c125-6200-4d28-aeb0-8a0390508c91"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088578 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088622 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dww6l\" (UniqueName: \"kubernetes.io/projected/3ad4c125-6200-4d28-aeb0-8a0390508c91-kube-api-access-dww6l\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088637 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088650 4860 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088663 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088676 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088692 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088704 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088718 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088731 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088743 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088755 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088766 4860 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ad4c125-6200-4d28-aeb0-8a0390508c91-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.088778 4860 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ad4c125-6200-4d28-aeb0-8a0390508c91-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.378588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" event={"ID":"3ad4c125-6200-4d28-aeb0-8a0390508c91","Type":"ContainerDied","Data":"796811bedb240be59f084cbdc538c465c6caa4ca2445432d39769631db7349fa"} Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.378688 4860 scope.go:117] "RemoveContainer" containerID="7e22b71e54d38dcb1540f64b14ab920600d8cbc3143d60e39016e319066aa32b" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.378609 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8khjj" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.378842 4860 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:46 crc kubenswrapper[4860]: I0123 08:17:46.378867 4860 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9e4f3198-2504-4e7f-83bc-c050b3eee2f0" Jan 23 08:17:53 crc kubenswrapper[4860]: I0123 08:17:53.700594 4860 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="25e53f4d-5b53-4e56-96d0-abe3f50a9e06" Jan 23 08:17:55 crc kubenswrapper[4860]: I0123 08:17:55.457553 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.262928 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.388163 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.388175 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.515076 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.690349 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.775506 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.775562 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.882789 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 08:17:56 crc kubenswrapper[4860]: I0123 08:17:56.890404 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 08:17:57 crc kubenswrapper[4860]: I0123 08:17:57.038949 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 08:17:57 crc kubenswrapper[4860]: I0123 08:17:57.269830 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 08:17:57 crc kubenswrapper[4860]: I0123 08:17:57.317783 4860 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 08:17:57 crc kubenswrapper[4860]: I0123 08:17:57.611073 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 08:17:57 crc kubenswrapper[4860]: I0123 08:17:57.885961 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.191206 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.269059 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.367561 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.444521 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.522071 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.547260 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.609485 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.678260 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.740465 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.746109 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.781598 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.883135 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.904285 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.916417 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 08:17:58 crc kubenswrapper[4860]: I0123 08:17:58.916550 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.050386 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.123968 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.155485 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.253375 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.266125 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.313116 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.344480 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.413439 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.582756 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.643892 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.719455 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.720056 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.741691 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 08:17:59 
crc kubenswrapper[4860]: I0123 08:17:59.776752 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.844496 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.867362 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.867403 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 08:17:59 crc kubenswrapper[4860]: I0123 08:17:59.944883 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.001666 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.034629 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.062601 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.089036 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.173666 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.189908 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.496175 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.546896 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.579181 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.606570 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.868561 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.934509 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.935077 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.958090 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 08:18:00 crc kubenswrapper[4860]: I0123 08:18:00.959540 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.045135 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.100678 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.105526 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.127149 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.154825 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.156314 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.273930 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.278381 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.474124 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.493792 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.495101 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.532782 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.536344 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.570416 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.650102 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.759391 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.766924 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.775601 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.778986 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.843382 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.844584 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.855534 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.865623 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.871754 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.906508 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 08:18:01 crc kubenswrapper[4860]: I0123 08:18:01.980700 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.023215 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.035963 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.113568 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.207949 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.673238 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.751532 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.812365 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.858700 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.868444 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.923638 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 08:18:02 crc kubenswrapper[4860]: I0123 08:18:02.925585 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" 
Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.083423 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.168878 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.185890 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.189870 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.197543 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.242905 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.359248 4860 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.452360 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.469083 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.503608 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.528812 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.598425 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.606697 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.647350 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.802171 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.822658 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.897079 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 08:18:03 crc kubenswrapper[4860]: I0123 08:18:03.942326 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.115753 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 08:18:04 
crc kubenswrapper[4860]: I0123 08:18:04.130179 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.161757 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.161916 4860 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.213640 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.265318 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.378759 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.498415 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.558466 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.642973 4860 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.650374 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-8khjj"] Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.650445 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.656309 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.659574 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.672923 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.673628 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.676869 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.676804213 podStartE2EDuration="19.676804213s" podCreationTimestamp="2026-01-23 08:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:04.669301583 +0000 UTC m=+191.297351788" watchObservedRunningTime="2026-01-23 08:18:04.676804213 +0000 UTC m=+191.304854438" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.782254 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 08:18:04 crc 
kubenswrapper[4860]: I0123 08:18:04.795000 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.823896 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.874143 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.899401 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.955249 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 08:18:04 crc kubenswrapper[4860]: I0123 08:18:04.965164 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.031826 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.049792 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.058476 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.090212 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.152628 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.225284 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.227484 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.330148 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.372593 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.386099 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.553248 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.573139 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.595974 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.596662 4860 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.598075 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.630086 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.650461 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.663490 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.666143 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad4c125-6200-4d28-aeb0-8a0390508c91" path="/var/lib/kubelet/pods/3ad4c125-6200-4d28-aeb0-8a0390508c91/volumes" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.729131 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.853577 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.880266 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.916171 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.952721 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.958999 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 08:18:05 crc kubenswrapper[4860]: I0123 08:18:05.980696 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.049723 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.119304 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.166999 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.197291 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.238429 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.260869 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.303801 4860 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.368075 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.418880 4860 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.459735 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.466649 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.514997 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.627468 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.630554 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.755949 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.783631 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.811140 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.932986 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.954851 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.981551 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 08:18:06 crc kubenswrapper[4860]: I0123 08:18:06.988933 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.045540 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.295608 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.369533 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.490122 4860 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.517494 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.649379 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.658909 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.742611 4860 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.808283 4860 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.808514 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0" gracePeriod=5 Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888142 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-848ffdc94b-q42xj"] Jan 23 08:18:07 crc kubenswrapper[4860]: E0123 08:18:07.888330 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad4c125-6200-4d28-aeb0-8a0390508c91" containerName="oauth-openshift" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888342 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad4c125-6200-4d28-aeb0-8a0390508c91" containerName="oauth-openshift" Jan 23 08:18:07 crc kubenswrapper[4860]: E0123 08:18:07.888358 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888364 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 08:18:07 crc kubenswrapper[4860]: E0123 08:18:07.888377 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" containerName="installer" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888383 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" containerName="installer" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888475 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad4c125-6200-4d28-aeb0-8a0390508c91" containerName="oauth-openshift" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888490 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888499 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0b7153-b2e7-4920-96bf-422b58e8b3de" containerName="installer" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.888843 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.890898 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.891324 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.892712 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.892941 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.893204 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.893896 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.894140 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.894185 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.894253 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.894274 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.894946 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.895148 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.901801 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.904471 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.906855 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.908896 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-848ffdc94b-q42xj"] Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.911628 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975663 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-audit-policies\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975713 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975736 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-audit-dir\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975753 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-error\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975773 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975810 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975838 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9h9j\" (UniqueName: \"kubernetes.io/projected/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-kube-api-access-q9h9j\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975864 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975891 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-session\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975925 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975951 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-login\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975973 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.975996 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:07 crc kubenswrapper[4860]: I0123 08:18:07.976037 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.078249 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-session\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079248 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " 
pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079355 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-login\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079469 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079578 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079658 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079746 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-audit-policies\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079842 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079916 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-audit-dir\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.079994 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-error\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 
08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.080103 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.080198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.080275 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9h9j\" (UniqueName: \"kubernetes.io/projected/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-kube-api-access-q9h9j\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.080348 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.080947 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.081716 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-audit-dir\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.083053 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-audit-policies\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.083784 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.083887 4860 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.085892 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-error\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.086809 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.088941 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.089310 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-user-template-login\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.090348 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.090900 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.101753 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9h9j\" (UniqueName: \"kubernetes.io/projected/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-kube-api-access-q9h9j\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.102803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-session\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.106100 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48b513ce-1d7f-46e2-b6d6-1e359e53b3bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-848ffdc94b-q42xj\" (UID: \"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb\") " pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.167544 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.190676 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.191802 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.208487 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.274037 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.333883 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.347185 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.425464 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.503911 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.537412 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.565407 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.620388 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.709926 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.734098 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.758621 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 08:18:08 
crc kubenswrapper[4860]: I0123 08:18:08.797811 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-848ffdc94b-q42xj"] Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.901134 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 08:18:08 crc kubenswrapper[4860]: I0123 08:18:08.951191 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.030705 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.324505 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.335579 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.436673 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.492323 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.531757 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" event={"ID":"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb","Type":"ContainerStarted","Data":"ff717e57f17975f2123b862adf2cc18898253c8e8b41071717304c670ebfee1e"} Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.531817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" event={"ID":"48b513ce-1d7f-46e2-b6d6-1e359e53b3bb","Type":"ContainerStarted","Data":"20f47acbdf544d6f4eadf282573afd7878eb53b50869613fdc20c2414031e565"} Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.533146 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.534308 4860 patch_prober.go:28] interesting pod/oauth-openshift-848ffdc94b-q42xj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.534347 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" podUID="48b513ce-1d7f-46e2-b6d6-1e359e53b3bb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.582116 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" podStartSLOduration=50.582099697 podStartE2EDuration="50.582099697s" podCreationTimestamp="2026-01-23 08:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
08:18:09.580787912 +0000 UTC m=+196.208838097" watchObservedRunningTime="2026-01-23 08:18:09.582099697 +0000 UTC m=+196.210149882" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.598627 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.601738 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.608394 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.628465 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.731071 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.958797 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.974268 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 08:18:09 crc kubenswrapper[4860]: I0123 08:18:09.979462 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.138382 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.223282 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.247428 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.293212 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.315968 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.460756 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.529353 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.540749 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-848ffdc94b-q42xj" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.557713 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.608735 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.752224 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.845300 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 08:18:10 crc kubenswrapper[4860]: I0123 08:18:10.953791 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 08:18:11 crc kubenswrapper[4860]: I0123 08:18:11.271820 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 08:18:11 crc kubenswrapper[4860]: I0123 08:18:11.298739 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 08:18:11 crc kubenswrapper[4860]: I0123 08:18:11.722633 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 08:18:11 crc kubenswrapper[4860]: I0123 08:18:11.905150 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 08:18:12 crc kubenswrapper[4860]: I0123 08:18:12.093467 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.463634 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.463732 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.551141 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.551181 4860 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0" exitCode=137 Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.551221 4860 scope.go:117] "RemoveContainer" containerID="30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.551313 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.570977 4860 scope.go:117] "RemoveContainer" containerID="30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0" Jan 23 08:18:13 crc kubenswrapper[4860]: E0123 08:18:13.571399 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0\": container with ID starting with 30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0 not found: ID does not exist" containerID="30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.571455 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0"} err="failed to get container status \"30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0\": rpc error: code = NotFound desc = could not find container \"30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0\": container with ID starting with 30f085d99dbcbce3f13ea6ee9b3c222c966d880668167f41d867d6fee596e1d0 not found: ID does not exist" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.654991 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655064 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655126 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655133 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655174 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655212 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655229 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655241 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655297 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655478 4860 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655489 4860 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655497 4860 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.655505 4860 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.663270 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.667535 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 08:18:13 crc kubenswrapper[4860]: I0123 08:18:13.756631 4860 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:14 crc kubenswrapper[4860]: I0123 08:18:14.115086 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.787153 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cs456"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.790746 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cs456" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="registry-server" containerID="cri-o://ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.800112 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dws9d"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.800424 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dws9d" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="registry-server" containerID="cri-o://364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.805898 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44v8b"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.806218 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-44v8b" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="registry-server" containerID="cri-o://ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.819797 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrqw5"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.820082 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zrqw5" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="registry-server" containerID="cri-o://a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.832316 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k9j8h"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.832699 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerName="marketplace-operator" containerID="cri-o://a71de5a69c1c2ca17ed6fa1685b4edb056f2402413f0e9ace1cc4e23df54bb08" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.838687 
4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlqk5"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.839089 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vlqk5" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="registry-server" containerID="cri-o://8397f0e97db5c35fb66c694b5f0e7c327a2680a288de0937aea5a96e378158d4" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.844703 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.845164 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wqsdd" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="registry-server" containerID="cri-o://a8708a71e5af1e5399c3c0c1b9783feb9530b83aa8b9ef3c5a1439f97606b60d" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.848456 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2x4xz"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.848787 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2x4xz" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="registry-server" containerID="cri-o://8edc210866a1bbbe3bea03d696219fbb1e659c61e8989ef9d0dba56d1d41d6c3" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.857662 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qrbrs"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.858578 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcrzp"] Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.858803 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tcrzp" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="registry-server" containerID="cri-o://407868d6debf9ea1dbf24a35b1c660f4761f3d4e4567d89e45b4a2af9ab7f2f5" gracePeriod=30 Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.858951 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:17 crc kubenswrapper[4860]: I0123 08:18:17.861624 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qrbrs"] Jan 23 08:18:17 crc kubenswrapper[4860]: E0123 08:18:17.901876 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1 is running failed: container process not found" containerID="ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:17 crc kubenswrapper[4860]: E0123 08:18:17.907989 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1 is running failed: container process not found" containerID="ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:17 crc kubenswrapper[4860]: E0123 08:18:17.908689 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1 is running failed: container process not found" containerID="ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:17 crc kubenswrapper[4860]: E0123 08:18:17.908732 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-44v8b" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="registry-server" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.011603 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25ce68cb-8937-4377-bdad-80b09dad889c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.011669 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25ce68cb-8937-4377-bdad-80b09dad889c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.011727 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r2q9\" (UniqueName: \"kubernetes.io/projected/25ce68cb-8937-4377-bdad-80b09dad889c-kube-api-access-8r2q9\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.107141 4860 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba is running failed: container process not found" containerID="364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.107839 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba is running failed: container process not found" containerID="364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.108325 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba is running failed: container process not found" containerID="364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.108414 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-dws9d" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="registry-server" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.112466 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r2q9\" (UniqueName: \"kubernetes.io/projected/25ce68cb-8937-4377-bdad-80b09dad889c-kube-api-access-8r2q9\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.112938 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25ce68cb-8937-4377-bdad-80b09dad889c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.113776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25ce68cb-8937-4377-bdad-80b09dad889c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.114136 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25ce68cb-8937-4377-bdad-80b09dad889c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.120939 4860 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/25ce68cb-8937-4377-bdad-80b09dad889c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.128803 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r2q9\" (UniqueName: \"kubernetes.io/projected/25ce68cb-8937-4377-bdad-80b09dad889c-kube-api-access-8r2q9\") pod \"marketplace-operator-79b997595-qrbrs\" (UID: \"25ce68cb-8937-4377-bdad-80b09dad889c\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.175115 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.311212 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2 is running failed: container process not found" containerID="a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.312077 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2 is running failed: container process not found" containerID="a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.312422 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2 is running failed: container process not found" containerID="a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.312471 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-zrqw5" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="registry-server" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.385412 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qrbrs"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.521507 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43 is running failed: container process not found" containerID="ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.522378 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43 is running failed: container process not found" containerID="ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.522792 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43 is running failed: container process not found" containerID="ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:18:18 crc kubenswrapper[4860]: E0123 08:18:18.522814 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-cs456" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="registry-server" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.586603 4860 generic.go:334] "Generic (PLEG): container finished" podID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerID="a8708a71e5af1e5399c3c0c1b9783feb9530b83aa8b9ef3c5a1439f97606b60d" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.586660 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqsdd" event={"ID":"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b","Type":"ContainerDied","Data":"a8708a71e5af1e5399c3c0c1b9783feb9530b83aa8b9ef3c5a1439f97606b60d"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.591264 4860 generic.go:334] "Generic (PLEG): container finished" podID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerID="ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.591330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v8b" event={"ID":"f2bd04fa-7e4e-4a22-9b93-418f22e296b2","Type":"ContainerDied","Data":"ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.594824 4860 generic.go:334] "Generic (PLEG): container finished" podID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerID="364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.594915 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerDied","Data":"364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.601916 4860 generic.go:334] "Generic (PLEG): container finished" podID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerID="8edc210866a1bbbe3bea03d696219fbb1e659c61e8989ef9d0dba56d1d41d6c3" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.601972 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerDied","Data":"8edc210866a1bbbe3bea03d696219fbb1e659c61e8989ef9d0dba56d1d41d6c3"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 
08:18:18.605655 4860 generic.go:334] "Generic (PLEG): container finished" podID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerID="ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.605736 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs456" event={"ID":"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b","Type":"ContainerDied","Data":"ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.607243 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" event={"ID":"25ce68cb-8937-4377-bdad-80b09dad889c","Type":"ContainerStarted","Data":"df11ae64064cab563a89965f8ac6078cc839eabc69dbbf850864ad60e2dcb319"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.609520 4860 generic.go:334] "Generic (PLEG): container finished" podID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerID="a71de5a69c1c2ca17ed6fa1685b4edb056f2402413f0e9ace1cc4e23df54bb08" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.609576 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" event={"ID":"2661d5fe-77bd-44ac-a136-5362fae787f8","Type":"ContainerDied","Data":"a71de5a69c1c2ca17ed6fa1685b4edb056f2402413f0e9ace1cc4e23df54bb08"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.611558 4860 generic.go:334] "Generic (PLEG): container finished" podID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerID="8397f0e97db5c35fb66c694b5f0e7c327a2680a288de0937aea5a96e378158d4" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.611611 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlqk5" event={"ID":"6938dfcf-ed32-45c3-a2a6-04f0b60315ac","Type":"ContainerDied","Data":"8397f0e97db5c35fb66c694b5f0e7c327a2680a288de0937aea5a96e378158d4"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.615128 4860 generic.go:334] "Generic (PLEG): container finished" podID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerID="407868d6debf9ea1dbf24a35b1c660f4761f3d4e4567d89e45b4a2af9ab7f2f5" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.615190 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerDied","Data":"407868d6debf9ea1dbf24a35b1c660f4761f3d4e4567d89e45b4a2af9ab7f2f5"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.617891 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerID="a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.617918 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqw5" event={"ID":"ce3742d9-24a7-4b15-9301-8d03596ae37b","Type":"ContainerDied","Data":"a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2"} Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.676929 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.717826 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.791745 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825375 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-utilities\") pod \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825443 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-catalog-content\") pod \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825494 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnh4z\" (UniqueName: \"kubernetes.io/projected/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-kube-api-access-nnh4z\") pod \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tz8\" (UniqueName: \"kubernetes.io/projected/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-kube-api-access-h5tz8\") pod \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\" (UID: \"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825551 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-operator-metrics\") pod \"2661d5fe-77bd-44ac-a136-5362fae787f8\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825588 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-catalog-content\") pod \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825609 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-utilities\") pod \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\" (UID: \"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825647 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-trusted-ca\") pod \"2661d5fe-77bd-44ac-a136-5362fae787f8\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.825672 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r5vb\" (UniqueName: \"kubernetes.io/projected/2661d5fe-77bd-44ac-a136-5362fae787f8-kube-api-access-5r5vb\") pod \"2661d5fe-77bd-44ac-a136-5362fae787f8\" (UID: \"2661d5fe-77bd-44ac-a136-5362fae787f8\") " Jan 23 08:18:18 
crc kubenswrapper[4860]: I0123 08:18:18.829531 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-utilities" (OuterVolumeSpecName: "utilities") pod "b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" (UID: "b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.830056 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2661d5fe-77bd-44ac-a136-5362fae787f8" (UID: "2661d5fe-77bd-44ac-a136-5362fae787f8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.833959 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-utilities" (OuterVolumeSpecName: "utilities") pod "94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" (UID: "94cbcca6-2f65-40ab-ab9f-37ac19db1f4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.837555 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-kube-api-access-h5tz8" (OuterVolumeSpecName: "kube-api-access-h5tz8") pod "94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" (UID: "94cbcca6-2f65-40ab-ab9f-37ac19db1f4b"). InnerVolumeSpecName "kube-api-access-h5tz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.849530 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2661d5fe-77bd-44ac-a136-5362fae787f8-kube-api-access-5r5vb" (OuterVolumeSpecName: "kube-api-access-5r5vb") pod "2661d5fe-77bd-44ac-a136-5362fae787f8" (UID: "2661d5fe-77bd-44ac-a136-5362fae787f8"). InnerVolumeSpecName "kube-api-access-5r5vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.849622 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2661d5fe-77bd-44ac-a136-5362fae787f8" (UID: "2661d5fe-77bd-44ac-a136-5362fae787f8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.851812 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-kube-api-access-nnh4z" (OuterVolumeSpecName: "kube-api-access-nnh4z") pod "b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" (UID: "b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f"). InnerVolumeSpecName "kube-api-access-nnh4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.868108 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.883352 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.886332 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.904849 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" (UID: "94cbcca6-2f65-40ab-ab9f-37ac19db1f4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.911937 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926454 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-catalog-content\") pod \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926520 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42l2\" (UniqueName: \"kubernetes.io/projected/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-kube-api-access-x42l2\") pod \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926573 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-catalog-content\") pod \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926611 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-utilities\") pod \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926651 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcpfz\" (UniqueName: \"kubernetes.io/projected/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-kube-api-access-xcpfz\") pod \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\" (UID: \"6938dfcf-ed32-45c3-a2a6-04f0b60315ac\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926698 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-utilities\") pod \"13d8c162-6bfc-447f-b688-6f4e74687cd8\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926732 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-utilities\") pod \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926785 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-utilities\") pod \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926808 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2g29\" (UniqueName: \"kubernetes.io/projected/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-kube-api-access-h2g29\") pod \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\" (UID: \"f2bd04fa-7e4e-4a22-9b93-418f22e296b2\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926892 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc69p\" (UniqueName: \"kubernetes.io/projected/13d8c162-6bfc-447f-b688-6f4e74687cd8-kube-api-access-bc69p\") pod \"13d8c162-6bfc-447f-b688-6f4e74687cd8\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926921 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-catalog-content\") pod \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\" (UID: \"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.926943 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-catalog-content\") pod \"13d8c162-6bfc-447f-b688-6f4e74687cd8\" (UID: \"13d8c162-6bfc-447f-b688-6f4e74687cd8\") " Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927228 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r5vb\" (UniqueName: \"kubernetes.io/projected/2661d5fe-77bd-44ac-a136-5362fae787f8-kube-api-access-5r5vb\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927252 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927262 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927274 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnh4z\" (UniqueName: \"kubernetes.io/projected/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-kube-api-access-nnh4z\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927283 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tz8\" (UniqueName: \"kubernetes.io/projected/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b-kube-api-access-h5tz8\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927294 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927304 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.927314 4860 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2661d5fe-77bd-44ac-a136-5362fae787f8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.934053 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-utilities" (OuterVolumeSpecName: "utilities") pod "f2bd04fa-7e4e-4a22-9b93-418f22e296b2" (UID: "f2bd04fa-7e4e-4a22-9b93-418f22e296b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.940822 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-utilities" (OuterVolumeSpecName: "utilities") pod "92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" (UID: "92e9a494-d0d1-4bb9-8cc9-86e044e7a75b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.943576 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-kube-api-access-x42l2" (OuterVolumeSpecName: "kube-api-access-x42l2") pod "92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" (UID: "92e9a494-d0d1-4bb9-8cc9-86e044e7a75b"). InnerVolumeSpecName "kube-api-access-x42l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.944010 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-utilities" (OuterVolumeSpecName: "utilities") pod "6938dfcf-ed32-45c3-a2a6-04f0b60315ac" (UID: "6938dfcf-ed32-45c3-a2a6-04f0b60315ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.945443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-utilities" (OuterVolumeSpecName: "utilities") pod "13d8c162-6bfc-447f-b688-6f4e74687cd8" (UID: "13d8c162-6bfc-447f-b688-6f4e74687cd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.947806 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d8c162-6bfc-447f-b688-6f4e74687cd8-kube-api-access-bc69p" (OuterVolumeSpecName: "kube-api-access-bc69p") pod "13d8c162-6bfc-447f-b688-6f4e74687cd8" (UID: "13d8c162-6bfc-447f-b688-6f4e74687cd8"). InnerVolumeSpecName "kube-api-access-bc69p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.948839 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-kube-api-access-xcpfz" (OuterVolumeSpecName: "kube-api-access-xcpfz") pod "6938dfcf-ed32-45c3-a2a6-04f0b60315ac" (UID: "6938dfcf-ed32-45c3-a2a6-04f0b60315ac"). InnerVolumeSpecName "kube-api-access-xcpfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.951105 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-kube-api-access-h2g29" (OuterVolumeSpecName: "kube-api-access-h2g29") pod "f2bd04fa-7e4e-4a22-9b93-418f22e296b2" (UID: "f2bd04fa-7e4e-4a22-9b93-418f22e296b2"). InnerVolumeSpecName "kube-api-access-h2g29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.972702 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6938dfcf-ed32-45c3-a2a6-04f0b60315ac" (UID: "6938dfcf-ed32-45c3-a2a6-04f0b60315ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4860]: I0123 08:18:18.991756 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" (UID: "92e9a494-d0d1-4bb9-8cc9-86e044e7a75b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.009742 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.019062 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028226 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2bd04fa-7e4e-4a22-9b93-418f22e296b2" (UID: "f2bd04fa-7e4e-4a22-9b93-418f22e296b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028286 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028425 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-utilities\") pod \"ce3742d9-24a7-4b15-9301-8d03596ae37b\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028463 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwz2t\" (UniqueName: \"kubernetes.io/projected/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-kube-api-access-pwz2t\") pod \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028502 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-utilities\") pod \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028560 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2gvf\" (UniqueName: \"kubernetes.io/projected/ce3742d9-24a7-4b15-9301-8d03596ae37b-kube-api-access-q2gvf\") pod \"ce3742d9-24a7-4b15-9301-8d03596ae37b\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028609 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-catalog-content\") pod \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\" (UID: \"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5\") " Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028654 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-catalog-content\") pod \"ce3742d9-24a7-4b15-9301-8d03596ae37b\" (UID: \"ce3742d9-24a7-4b15-9301-8d03596ae37b\") " Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028856 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028881 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42l2\" (UniqueName: \"kubernetes.io/projected/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-kube-api-access-x42l2\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028895 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028908 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028919 4860 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-xcpfz\" (UniqueName: \"kubernetes.io/projected/6938dfcf-ed32-45c3-a2a6-04f0b60315ac-kube-api-access-xcpfz\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028931 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028942 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028954 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028965 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2g29\" (UniqueName: \"kubernetes.io/projected/f2bd04fa-7e4e-4a22-9b93-418f22e296b2-kube-api-access-h2g29\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028978 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc69p\" (UniqueName: \"kubernetes.io/projected/13d8c162-6bfc-447f-b688-6f4e74687cd8-kube-api-access-bc69p\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.028989 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.029618 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-utilities" (OuterVolumeSpecName: "utilities") pod "ce3742d9-24a7-4b15-9301-8d03596ae37b" (UID: "ce3742d9-24a7-4b15-9301-8d03596ae37b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.030395 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-utilities" (OuterVolumeSpecName: "utilities") pod "f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" (UID: "f61b81b4-d438-4fd0-a8c0-4f609b4d37c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.033924 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" (UID: "b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.038998 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-kube-api-access-pwz2t" (OuterVolumeSpecName: "kube-api-access-pwz2t") pod "f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" (UID: "f61b81b4-d438-4fd0-a8c0-4f609b4d37c5"). InnerVolumeSpecName "kube-api-access-pwz2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.052650 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3742d9-24a7-4b15-9301-8d03596ae37b-kube-api-access-q2gvf" (OuterVolumeSpecName: "kube-api-access-q2gvf") pod "ce3742d9-24a7-4b15-9301-8d03596ae37b" (UID: "ce3742d9-24a7-4b15-9301-8d03596ae37b"). InnerVolumeSpecName "kube-api-access-q2gvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.096316 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce3742d9-24a7-4b15-9301-8d03596ae37b" (UID: "ce3742d9-24a7-4b15-9301-8d03596ae37b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.109810 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" (UID: "f61b81b4-d438-4fd0-a8c0-4f609b4d37c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129430 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129464 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3742d9-24a7-4b15-9301-8d03596ae37b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129474 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwz2t\" (UniqueName: \"kubernetes.io/projected/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-kube-api-access-pwz2t\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129485 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129494 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129502 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2gvf\" (UniqueName: \"kubernetes.io/projected/ce3742d9-24a7-4b15-9301-8d03596ae37b-kube-api-access-q2gvf\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.129510 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.149590 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "13d8c162-6bfc-447f-b688-6f4e74687cd8" (UID: "13d8c162-6bfc-447f-b688-6f4e74687cd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.230446 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d8c162-6bfc-447f-b688-6f4e74687cd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.627238 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" event={"ID":"25ce68cb-8937-4377-bdad-80b09dad889c","Type":"ContainerStarted","Data":"108cd52e19f25c2f1bde40237782f13922111f6b7f115d6ecf6656bd3cea720c"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.629337 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.632637 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.633095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqsdd" event={"ID":"92e9a494-d0d1-4bb9-8cc9-86e044e7a75b","Type":"ContainerDied","Data":"c2a038b0564fbe984aebb60d971a2190c42f410f8e1e00fe80b1a7d7dd556d09"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.633417 4860 scope.go:117] "RemoveContainer" containerID="a8708a71e5af1e5399c3c0c1b9783feb9530b83aa8b9ef3c5a1439f97606b60d" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.633213 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqsdd" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.644632 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vlqk5" event={"ID":"6938dfcf-ed32-45c3-a2a6-04f0b60315ac","Type":"ContainerDied","Data":"40a1bf1857e2489755387482e560ec95d2023b8b0b6bed97867ae3ed3236acb4"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.644737 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vlqk5" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.650182 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44v8b" event={"ID":"f2bd04fa-7e4e-4a22-9b93-418f22e296b2","Type":"ContainerDied","Data":"55fac23b6701291d1d4d76cd940257734df46e41e8a05450c2edf1bfc94f8cdd"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.651106 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44v8b" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.653414 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qrbrs" podStartSLOduration=2.6533942010000002 podStartE2EDuration="2.653394201s" podCreationTimestamp="2026-01-23 08:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:19.650390726 +0000 UTC m=+206.278440931" watchObservedRunningTime="2026-01-23 08:18:19.653394201 +0000 UTC m=+206.281444386" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.656919 4860 scope.go:117] "RemoveContainer" containerID="8e4d7769a76f1642866b75757e7d700aa75c9dc6f51abc09a6e50840c6e8faa8" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.657342 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dws9d" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.659622 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2x4xz" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.671568 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cs456" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.673224 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.681091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dws9d" event={"ID":"f61b81b4-d438-4fd0-a8c0-4f609b4d37c5","Type":"ContainerDied","Data":"1bb4ba557fdb359599fc2b5b2f359240d5ba6808086b8236a261e8e7163a88d9"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.681135 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x4xz" event={"ID":"13d8c162-6bfc-447f-b688-6f4e74687cd8","Type":"ContainerDied","Data":"748fa058f78cd7fed22e20b03c4886443bd3a93989a0bd235fed97321fd370ba"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.681151 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cs456" event={"ID":"94cbcca6-2f65-40ab-ab9f-37ac19db1f4b","Type":"ContainerDied","Data":"9b549e5464cfacbda0374f9688b7022c712b7d2b48a69fed62565bca66f936ca"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.681163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k9j8h" event={"ID":"2661d5fe-77bd-44ac-a136-5362fae787f8","Type":"ContainerDied","Data":"662f4609e7210b4b7c9add8d9e3d540659b5e7ac06d4402007f94eb84076ad12"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.681174 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcrzp" event={"ID":"b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f","Type":"ContainerDied","Data":"1b3b048664ea216f02ff364eb6c6bfb2a6f30c39b1e74d4016eca986dffd89ec"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.681272 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tcrzp" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.702677 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrqw5" event={"ID":"ce3742d9-24a7-4b15-9301-8d03596ae37b","Type":"ContainerDied","Data":"5a1b2483986f272e4218a1250c1163cd19ef321167579e4beab2a3ddda80e70a"} Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.702783 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrqw5" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.705357 4860 scope.go:117] "RemoveContainer" containerID="79e583ef3f9e022b60543423e4680f00d0c13aa67caa354d4f2b91e31eea812b" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.718657 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.724066 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqsdd"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.771831 4860 scope.go:117] "RemoveContainer" containerID="8397f0e97db5c35fb66c694b5f0e7c327a2680a288de0937aea5a96e378158d4" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.786374 4860 scope.go:117] "RemoveContainer" containerID="42765b2438eb06f38cd6b310bcd0d9c693675ad16eb0c9ead5ce252e7c671704" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.798108 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2x4xz"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.803367 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2x4xz"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.814742 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dws9d"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.817240 4860 scope.go:117] "RemoveContainer" containerID="dd28a75d93dcce01b3124cff7f543d6bf285f23b193dc83cb804694ae1168249" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.819560 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dws9d"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.823492 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlqk5"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.828137 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vlqk5"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.836783 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k9j8h"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.837133 4860 scope.go:117] "RemoveContainer" containerID="ef0f28b7b39e59df503efc5e597ad78a7653dd9b92fcb767014264277d9756d1" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.839406 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k9j8h"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.850826 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-44v8b"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.857084 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-44v8b"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.860244 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cs456"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.862838 4860 scope.go:117] "RemoveContainer" containerID="7676edafe78ba02e0ac92685523271df21b2402728ed8a7db80119d38d7c0fe7" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.863716 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cs456"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.876193 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcrzp"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.883333 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tcrzp"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.886694 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrqw5"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.889516 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zrqw5"] Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.890584 4860 scope.go:117] "RemoveContainer" containerID="c99c1540542ed48ce31c24069544813082555a3724400e7e58a46608ffbdc127" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.904623 4860 scope.go:117] "RemoveContainer" containerID="364ebaa50deb592c5d46bea217544a97bf6544716746f6e97617c31f1e2dfcba" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.928165 4860 scope.go:117] "RemoveContainer" containerID="cc2f29140d62d5463ce0408b62d2bc3f1cff06f0fa09497c54b4209b7cc48506" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.943473 4860 scope.go:117] "RemoveContainer" containerID="ba5a64ba5547861c6913ff7a7b780f463247e6c35ecd28b0ef9dbbfadd6b3f9a" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.957887 4860 scope.go:117] "RemoveContainer" containerID="8edc210866a1bbbe3bea03d696219fbb1e659c61e8989ef9d0dba56d1d41d6c3" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.981265 4860 scope.go:117] "RemoveContainer" containerID="ca415b17882df9ae68d9e12ce7e4dfaa0b765d2bc322d7bd4d2fa5cc9e694581" Jan 23 08:18:19 crc kubenswrapper[4860]: I0123 08:18:19.994890 4860 scope.go:117] "RemoveContainer" containerID="53843d42271c87a40e04ef3ab9487edb07464ede4443dc0301c69aee8009c81c" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.016870 4860 scope.go:117] "RemoveContainer" containerID="ac5a2f5dc3d2ac5e95d87c73e2bb3d34f086b1266774e1104699a53221b9ea43" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.035525 4860 scope.go:117] "RemoveContainer" containerID="6ae08939fb4c8f3975be36eeed5a5ce8ab73b15856a589d516f42c18307ff2f1" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.049186 4860 scope.go:117] "RemoveContainer" containerID="ecfda0ef9c96c72d8803eda616b9d88144f7b386325a40a771fa7e6fcb212187" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.060905 4860 scope.go:117] "RemoveContainer" containerID="a71de5a69c1c2ca17ed6fa1685b4edb056f2402413f0e9ace1cc4e23df54bb08" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.075807 4860 scope.go:117] "RemoveContainer" containerID="407868d6debf9ea1dbf24a35b1c660f4761f3d4e4567d89e45b4a2af9ab7f2f5" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.088851 4860 scope.go:117] "RemoveContainer" 
containerID="bb5c54edbf79b68b6ff550e0cb53c6c9e7c47f79fbb1e42e26b8c4e1ca6a6d28" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.102728 4860 scope.go:117] "RemoveContainer" containerID="c726714b2fd451d9502ceb048bf4825aa99d36bddd225566d9e8fb4fb03010d0" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.116042 4860 scope.go:117] "RemoveContainer" containerID="a55416e3905948d1646c69581869f5aa698f3aba6d6c5097edf17075b9c0c2a2" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.130428 4860 scope.go:117] "RemoveContainer" containerID="72d9248d7abbc8039db720be0bfa76519665348a70824aa7d322d39262e5ce27" Jan 23 08:18:20 crc kubenswrapper[4860]: I0123 08:18:20.143838 4860 scope.go:117] "RemoveContainer" containerID="9ff0eded88ec6381d008d0833991054ad5ae0df69aa4e9975f0f7d47c545441e" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.663923 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" path="/var/lib/kubelet/pods/13d8c162-6bfc-447f-b688-6f4e74687cd8/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.664580 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" path="/var/lib/kubelet/pods/2661d5fe-77bd-44ac-a136-5362fae787f8/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.665010 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" path="/var/lib/kubelet/pods/6938dfcf-ed32-45c3-a2a6-04f0b60315ac/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.666065 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" path="/var/lib/kubelet/pods/92e9a494-d0d1-4bb9-8cc9-86e044e7a75b/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.666665 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" path="/var/lib/kubelet/pods/94cbcca6-2f65-40ab-ab9f-37ac19db1f4b/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.667819 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" path="/var/lib/kubelet/pods/b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.668410 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" path="/var/lib/kubelet/pods/ce3742d9-24a7-4b15-9301-8d03596ae37b/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.668940 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" path="/var/lib/kubelet/pods/f2bd04fa-7e4e-4a22-9b93-418f22e296b2/volumes" Jan 23 08:18:21 crc kubenswrapper[4860]: I0123 08:18:21.669871 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" path="/var/lib/kubelet/pods/f61b81b4-d438-4fd0-a8c0-4f609b4d37c5/volumes" Jan 23 08:18:26 crc kubenswrapper[4860]: I0123 08:18:26.776067 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:18:26 crc kubenswrapper[4860]: I0123 08:18:26.776514 4860 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:18:26 crc kubenswrapper[4860]: I0123 08:18:26.776562 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:18:26 crc kubenswrapper[4860]: I0123 08:18:26.777162 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:18:26 crc kubenswrapper[4860]: I0123 08:18:26.777214 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c" gracePeriod=600 Jan 23 08:18:27 crc kubenswrapper[4860]: I0123 08:18:27.752909 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c" exitCode=0 Jan 23 08:18:27 crc kubenswrapper[4860]: I0123 08:18:27.753458 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c"} Jan 23 08:18:27 crc kubenswrapper[4860]: I0123 08:18:27.753488 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"789e53bf4116462d0d867afc4faead4f91efb1364fa83cabf4ea344608af1714"} Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.043373 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdn58"] Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.044045 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" podUID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" containerName="controller-manager" containerID="cri-o://add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6" gracePeriod=30 Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.118606 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv"] Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.118830 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" podUID="686eac92-c672-4c4d-bf80-8e47a557a52c" containerName="route-controller-manager" containerID="cri-o://5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6" gracePeriod=30 Jan 23 08:18:37 crc kubenswrapper[4860]: E0123 08:18:37.315981 4860 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod686eac92_c672_4c4d_bf80_8e47a557a52c.slice/crio-conmon-5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6.scope\": RecentStats: unable to find data in memory cache]" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.367585 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.476226 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.560574 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-proxy-ca-bundles\") pod \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.560653 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-client-ca\") pod \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.560703 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq276\" (UniqueName: \"kubernetes.io/projected/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-kube-api-access-nq276\") pod \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.560719 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-config\") pod \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.560744 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-serving-cert\") pod \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\" (UID: \"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.561665 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" (UID: "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.561760 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" (UID: "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.561781 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-config" (OuterVolumeSpecName: "config") pod "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" (UID: "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.566061 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-kube-api-access-nq276" (OuterVolumeSpecName: "kube-api-access-nq276") pod "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" (UID: "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2"). InnerVolumeSpecName "kube-api-access-nq276". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.566092 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" (UID: "4a70f78c-30ad-42bb-a8d6-c7ef144db4f2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661464 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vn8d\" (UniqueName: \"kubernetes.io/projected/686eac92-c672-4c4d-bf80-8e47a557a52c-kube-api-access-8vn8d\") pod \"686eac92-c672-4c4d-bf80-8e47a557a52c\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661546 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-client-ca\") pod \"686eac92-c672-4c4d-bf80-8e47a557a52c\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661652 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-config\") pod \"686eac92-c672-4c4d-bf80-8e47a557a52c\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661698 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686eac92-c672-4c4d-bf80-8e47a557a52c-serving-cert\") pod \"686eac92-c672-4c4d-bf80-8e47a557a52c\" (UID: \"686eac92-c672-4c4d-bf80-8e47a557a52c\") " Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661843 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661855 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq276\" (UniqueName: \"kubernetes.io/projected/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-kube-api-access-nq276\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661865 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-config\") on node \"crc\" DevicePath 
\"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661875 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.661887 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.662279 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-client-ca" (OuterVolumeSpecName: "client-ca") pod "686eac92-c672-4c4d-bf80-8e47a557a52c" (UID: "686eac92-c672-4c4d-bf80-8e47a557a52c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.662404 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-config" (OuterVolumeSpecName: "config") pod "686eac92-c672-4c4d-bf80-8e47a557a52c" (UID: "686eac92-c672-4c4d-bf80-8e47a557a52c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.664930 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686eac92-c672-4c4d-bf80-8e47a557a52c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "686eac92-c672-4c4d-bf80-8e47a557a52c" (UID: "686eac92-c672-4c4d-bf80-8e47a557a52c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.664992 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686eac92-c672-4c4d-bf80-8e47a557a52c-kube-api-access-8vn8d" (OuterVolumeSpecName: "kube-api-access-8vn8d") pod "686eac92-c672-4c4d-bf80-8e47a557a52c" (UID: "686eac92-c672-4c4d-bf80-8e47a557a52c"). InnerVolumeSpecName "kube-api-access-8vn8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.762646 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.762680 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686eac92-c672-4c4d-bf80-8e47a557a52c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.762695 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vn8d\" (UniqueName: \"kubernetes.io/projected/686eac92-c672-4c4d-bf80-8e47a557a52c-kube-api-access-8vn8d\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.762709 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686eac92-c672-4c4d-bf80-8e47a557a52c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.806855 4860 generic.go:334] "Generic (PLEG): container finished" podID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" containerID="add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6" exitCode=0 Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.806938 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.806940 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" event={"ID":"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2","Type":"ContainerDied","Data":"add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6"} Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.807097 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hdn58" event={"ID":"4a70f78c-30ad-42bb-a8d6-c7ef144db4f2","Type":"ContainerDied","Data":"75974b4bea4ac0ad23c0087839eb8bd5637ab6c44e79e7c080f03562f6cab356"} Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.807142 4860 scope.go:117] "RemoveContainer" containerID="add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.810538 4860 generic.go:334] "Generic (PLEG): container finished" podID="686eac92-c672-4c4d-bf80-8e47a557a52c" containerID="5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6" exitCode=0 Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.810590 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" event={"ID":"686eac92-c672-4c4d-bf80-8e47a557a52c","Type":"ContainerDied","Data":"5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6"} Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.810632 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" event={"ID":"686eac92-c672-4c4d-bf80-8e47a557a52c","Type":"ContainerDied","Data":"c4acb6afc748f81f2514acfad525d3a4b4a2659f17c3375be01ff57d72da301a"} Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.810688 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.825946 4860 scope.go:117] "RemoveContainer" containerID="add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6" Jan 23 08:18:37 crc kubenswrapper[4860]: E0123 08:18:37.826403 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6\": container with ID starting with add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6 not found: ID does not exist" containerID="add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.826448 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6"} err="failed to get container status \"add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6\": rpc error: code = NotFound desc = could not find container \"add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6\": container with ID starting with add5c11796b070a656d2afb9e74285fba66aefae379a375aedc139ad9be4f8e6 not found: ID does not exist" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.826480 4860 scope.go:117] "RemoveContainer" containerID="5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.835807 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv"] Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.838916 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m5qwv"] Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.841378 4860 scope.go:117] "RemoveContainer" containerID="5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6" Jan 23 08:18:37 crc kubenswrapper[4860]: E0123 08:18:37.841720 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6\": container with ID starting with 5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6 not found: ID does not exist" containerID="5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.841755 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6"} err="failed to get container status \"5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6\": rpc error: code = NotFound desc = could not find container \"5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6\": container with ID starting with 5420340c9cb6bdacc3cb3c6b7e143326988b00f441e7ca6613332ba5e9205eb6 not found: ID does not exist" Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.842116 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdn58"] Jan 23 08:18:37 crc kubenswrapper[4860]: I0123 08:18:37.847567 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hdn58"] Jan 23 
08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.353975 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57769dc656-zrvv7"] Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354296 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354312 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354325 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354334 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354351 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354360 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354371 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354378 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354393 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354401 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354414 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354423 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354434 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" containerName="controller-manager" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354442 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" containerName="controller-manager" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354455 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354463 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354476 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" 
containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354484 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354496 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354503 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354517 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354525 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354539 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354549 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354558 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354567 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354578 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354586 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354596 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354604 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354613 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354622 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354633 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354641 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354651 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354659 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354670 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354678 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="extract-utilities" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354690 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686eac92-c672-4c4d-bf80-8e47a557a52c" containerName="route-controller-manager" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354698 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="686eac92-c672-4c4d-bf80-8e47a557a52c" containerName="route-controller-manager" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354711 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354719 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354733 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354741 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354780 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerName="marketplace-operator" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354790 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerName="marketplace-operator" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354801 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354809 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354820 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354828 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="extract-content" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354839 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354849 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: E0123 08:18:38.354862 4860 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354870 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.354992 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d8c162-6bfc-447f-b688-6f4e74687cd8" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355009 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61b81b4-d438-4fd0-a8c0-4f609b4d37c5" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355048 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="94cbcca6-2f65-40ab-ab9f-37ac19db1f4b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355058 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="686eac92-c672-4c4d-bf80-8e47a557a52c" containerName="route-controller-manager" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355068 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e87f7c-9b8e-49ce-bb6e-85889ec0b95f" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355077 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="6938dfcf-ed32-45c3-a2a6-04f0b60315ac" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355088 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3742d9-24a7-4b15-9301-8d03596ae37b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355098 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e9a494-d0d1-4bb9-8cc9-86e044e7a75b" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355109 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" containerName="controller-manager" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355121 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2661d5fe-77bd-44ac-a136-5362fae787f8" containerName="marketplace-operator" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355131 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bd04fa-7e4e-4a22-9b93-418f22e296b2" containerName="registry-server" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.355669 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.358838 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf"] Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.359657 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.362641 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.362749 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.362881 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.362929 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.363343 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.363523 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.363946 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.364349 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.364611 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.364773 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.364886 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.364935 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.370234 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.382253 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57769dc656-zrvv7"] Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.401779 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf"] Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.469098 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-client-ca\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.469439 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-client-ca\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.469576 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-config\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.470085 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrzj\" (UniqueName: \"kubernetes.io/projected/8cfa6457-12a1-4319-a04f-020b725e3a8f-kube-api-access-jqrzj\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.470228 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa6457-12a1-4319-a04f-020b725e3a8f-serving-cert\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.470403 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-config\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.470524 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-proxy-ca-bundles\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.470640 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8xw\" (UniqueName: \"kubernetes.io/projected/9f55dca2-1021-49bf-aeb5-e31f34365884-kube-api-access-gn8xw\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.470765 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f55dca2-1021-49bf-aeb5-e31f34365884-serving-cert\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.571749 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jqrzj\" (UniqueName: \"kubernetes.io/projected/8cfa6457-12a1-4319-a04f-020b725e3a8f-kube-api-access-jqrzj\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572043 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa6457-12a1-4319-a04f-020b725e3a8f-serving-cert\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572398 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-config\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572547 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-proxy-ca-bundles\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572672 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8xw\" (UniqueName: \"kubernetes.io/projected/9f55dca2-1021-49bf-aeb5-e31f34365884-kube-api-access-gn8xw\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572770 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f55dca2-1021-49bf-aeb5-e31f34365884-serving-cert\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572854 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-client-ca\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.572945 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-client-ca\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.573072 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-config\") pod 
\"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.573834 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-config\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.573993 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-client-ca\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.574617 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-proxy-ca-bundles\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.574655 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-config\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.574723 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-client-ca\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.577267 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa6457-12a1-4319-a04f-020b725e3a8f-serving-cert\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.579615 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f55dca2-1021-49bf-aeb5-e31f34365884-serving-cert\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.588997 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrzj\" (UniqueName: \"kubernetes.io/projected/8cfa6457-12a1-4319-a04f-020b725e3a8f-kube-api-access-jqrzj\") pod \"controller-manager-57769dc656-zrvv7\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 
08:18:38.592261 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8xw\" (UniqueName: \"kubernetes.io/projected/9f55dca2-1021-49bf-aeb5-e31f34365884-kube-api-access-gn8xw\") pod \"route-controller-manager-6fb4f49856-28hlf\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.683187 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.696036 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:38 crc kubenswrapper[4860]: I0123 08:18:38.914570 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf"] Jan 23 08:18:38 crc kubenswrapper[4860]: W0123 08:18:38.920402 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f55dca2_1021_49bf_aeb5_e31f34365884.slice/crio-683f203c09b00f46297128c51924a7d0922dcfd22ea3ba360f0924f63f7e719a WatchSource:0}: Error finding container 683f203c09b00f46297128c51924a7d0922dcfd22ea3ba360f0924f63f7e719a: Status 404 returned error can't find the container with id 683f203c09b00f46297128c51924a7d0922dcfd22ea3ba360f0924f63f7e719a Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.077201 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57769dc656-zrvv7"] Jan 23 08:18:39 crc kubenswrapper[4860]: W0123 08:18:39.083686 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cfa6457_12a1_4319_a04f_020b725e3a8f.slice/crio-982b6c594fc2145df928c2128c9848b04892c1efd266640c636ff0ffc84519e7 WatchSource:0}: Error finding container 982b6c594fc2145df928c2128c9848b04892c1efd266640c636ff0ffc84519e7: Status 404 returned error can't find the container with id 982b6c594fc2145df928c2128c9848b04892c1efd266640c636ff0ffc84519e7 Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.665092 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a70f78c-30ad-42bb-a8d6-c7ef144db4f2" path="/var/lib/kubelet/pods/4a70f78c-30ad-42bb-a8d6-c7ef144db4f2/volumes" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.665914 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686eac92-c672-4c4d-bf80-8e47a557a52c" path="/var/lib/kubelet/pods/686eac92-c672-4c4d-bf80-8e47a557a52c/volumes" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.833059 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" event={"ID":"8cfa6457-12a1-4319-a04f-020b725e3a8f","Type":"ContainerStarted","Data":"0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a"} Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.833108 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" event={"ID":"8cfa6457-12a1-4319-a04f-020b725e3a8f","Type":"ContainerStarted","Data":"982b6c594fc2145df928c2128c9848b04892c1efd266640c636ff0ffc84519e7"} Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.833383 4860 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.835929 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" event={"ID":"9f55dca2-1021-49bf-aeb5-e31f34365884","Type":"ContainerStarted","Data":"904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc"} Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.836075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" event={"ID":"9f55dca2-1021-49bf-aeb5-e31f34365884","Type":"ContainerStarted","Data":"683f203c09b00f46297128c51924a7d0922dcfd22ea3ba360f0924f63f7e719a"} Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.836173 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.844750 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.847610 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.883620 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" podStartSLOduration=2.883594429 podStartE2EDuration="2.883594429s" podCreationTimestamp="2026-01-23 08:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:39.863150034 +0000 UTC m=+226.491200219" watchObservedRunningTime="2026-01-23 08:18:39.883594429 +0000 UTC m=+226.511644624" Jan 23 08:18:39 crc kubenswrapper[4860]: I0123 08:18:39.912314 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" podStartSLOduration=2.91229631 podStartE2EDuration="2.91229631s" podCreationTimestamp="2026-01-23 08:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:39.886241995 +0000 UTC m=+226.514292210" watchObservedRunningTime="2026-01-23 08:18:39.91229631 +0000 UTC m=+226.540346485" Jan 23 08:18:41 crc kubenswrapper[4860]: I0123 08:18:41.269960 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57769dc656-zrvv7"] Jan 23 08:18:41 crc kubenswrapper[4860]: I0123 08:18:41.297161 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf"] Jan 23 08:18:42 crc kubenswrapper[4860]: I0123 08:18:42.849873 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" podUID="9f55dca2-1021-49bf-aeb5-e31f34365884" containerName="route-controller-manager" containerID="cri-o://904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc" gracePeriod=30 Jan 23 08:18:42 crc kubenswrapper[4860]: I0123 08:18:42.849953 4860 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" podUID="8cfa6457-12a1-4319-a04f-020b725e3a8f" containerName="controller-manager" containerID="cri-o://0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a" gracePeriod=30 Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.234709 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.262858 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn"] Jan 23 08:18:43 crc kubenswrapper[4860]: E0123 08:18:43.263156 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f55dca2-1021-49bf-aeb5-e31f34365884" containerName="route-controller-manager" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.263174 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f55dca2-1021-49bf-aeb5-e31f34365884" containerName="route-controller-manager" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.263344 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f55dca2-1021-49bf-aeb5-e31f34365884" containerName="route-controller-manager" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.264304 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.269473 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn"] Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.308147 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431196 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-proxy-ca-bundles\") pod \"8cfa6457-12a1-4319-a04f-020b725e3a8f\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431260 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-config\") pod \"8cfa6457-12a1-4319-a04f-020b725e3a8f\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431299 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f55dca2-1021-49bf-aeb5-e31f34365884-serving-cert\") pod \"9f55dca2-1021-49bf-aeb5-e31f34365884\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431340 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-client-ca\") pod \"8cfa6457-12a1-4319-a04f-020b725e3a8f\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431364 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-client-ca\") pod \"9f55dca2-1021-49bf-aeb5-e31f34365884\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431467 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa6457-12a1-4319-a04f-020b725e3a8f-serving-cert\") pod \"8cfa6457-12a1-4319-a04f-020b725e3a8f\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431525 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8xw\" (UniqueName: \"kubernetes.io/projected/9f55dca2-1021-49bf-aeb5-e31f34365884-kube-api-access-gn8xw\") pod \"9f55dca2-1021-49bf-aeb5-e31f34365884\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431553 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-config\") pod \"9f55dca2-1021-49bf-aeb5-e31f34365884\" (UID: \"9f55dca2-1021-49bf-aeb5-e31f34365884\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431574 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrzj\" (UniqueName: \"kubernetes.io/projected/8cfa6457-12a1-4319-a04f-020b725e3a8f-kube-api-access-jqrzj\") pod \"8cfa6457-12a1-4319-a04f-020b725e3a8f\" (UID: \"8cfa6457-12a1-4319-a04f-020b725e3a8f\") " Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431872 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-client-ca\") pod 
\"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431912 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-config\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.431969 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzq6x\" (UniqueName: \"kubernetes.io/projected/a6a2e39a-e1be-447e-8375-e43e083ad1ad-kube-api-access-gzq6x\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.432002 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a2e39a-e1be-447e-8375-e43e083ad1ad-serving-cert\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.432156 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-client-ca" (OuterVolumeSpecName: "client-ca") pod "8cfa6457-12a1-4319-a04f-020b725e3a8f" (UID: "8cfa6457-12a1-4319-a04f-020b725e3a8f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.432170 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8cfa6457-12a1-4319-a04f-020b725e3a8f" (UID: "8cfa6457-12a1-4319-a04f-020b725e3a8f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.432524 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-client-ca" (OuterVolumeSpecName: "client-ca") pod "9f55dca2-1021-49bf-aeb5-e31f34365884" (UID: "9f55dca2-1021-49bf-aeb5-e31f34365884"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.432683 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-config" (OuterVolumeSpecName: "config") pod "8cfa6457-12a1-4319-a04f-020b725e3a8f" (UID: "8cfa6457-12a1-4319-a04f-020b725e3a8f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.433997 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-config" (OuterVolumeSpecName: "config") pod "9f55dca2-1021-49bf-aeb5-e31f34365884" (UID: "9f55dca2-1021-49bf-aeb5-e31f34365884"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.436741 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfa6457-12a1-4319-a04f-020b725e3a8f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cfa6457-12a1-4319-a04f-020b725e3a8f" (UID: "8cfa6457-12a1-4319-a04f-020b725e3a8f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.436954 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f55dca2-1021-49bf-aeb5-e31f34365884-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9f55dca2-1021-49bf-aeb5-e31f34365884" (UID: "9f55dca2-1021-49bf-aeb5-e31f34365884"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.437948 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f55dca2-1021-49bf-aeb5-e31f34365884-kube-api-access-gn8xw" (OuterVolumeSpecName: "kube-api-access-gn8xw") pod "9f55dca2-1021-49bf-aeb5-e31f34365884" (UID: "9f55dca2-1021-49bf-aeb5-e31f34365884"). InnerVolumeSpecName "kube-api-access-gn8xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.438463 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfa6457-12a1-4319-a04f-020b725e3a8f-kube-api-access-jqrzj" (OuterVolumeSpecName: "kube-api-access-jqrzj") pod "8cfa6457-12a1-4319-a04f-020b725e3a8f" (UID: "8cfa6457-12a1-4319-a04f-020b725e3a8f"). InnerVolumeSpecName "kube-api-access-jqrzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533618 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-client-ca\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533676 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-config\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzq6x\" (UniqueName: \"kubernetes.io/projected/a6a2e39a-e1be-447e-8375-e43e083ad1ad-kube-api-access-gzq6x\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533756 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a2e39a-e1be-447e-8375-e43e083ad1ad-serving-cert\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533807 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f55dca2-1021-49bf-aeb5-e31f34365884-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533821 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533833 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533844 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfa6457-12a1-4319-a04f-020b725e3a8f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533855 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn8xw\" (UniqueName: \"kubernetes.io/projected/9f55dca2-1021-49bf-aeb5-e31f34365884-kube-api-access-gn8xw\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533870 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f55dca2-1021-49bf-aeb5-e31f34365884-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533880 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqrzj\" (UniqueName: 
\"kubernetes.io/projected/8cfa6457-12a1-4319-a04f-020b725e3a8f-kube-api-access-jqrzj\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533892 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.533902 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfa6457-12a1-4319-a04f-020b725e3a8f-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.535552 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-client-ca\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.535637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-config\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.539430 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a2e39a-e1be-447e-8375-e43e083ad1ad-serving-cert\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.557071 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzq6x\" (UniqueName: \"kubernetes.io/projected/a6a2e39a-e1be-447e-8375-e43e083ad1ad-kube-api-access-gzq6x\") pod \"route-controller-manager-657fdc8645-jrtxn\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.607167 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.856746 4860 generic.go:334] "Generic (PLEG): container finished" podID="9f55dca2-1021-49bf-aeb5-e31f34365884" containerID="904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc" exitCode=0 Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.857203 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" event={"ID":"9f55dca2-1021-49bf-aeb5-e31f34365884","Type":"ContainerDied","Data":"904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc"} Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.857242 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" event={"ID":"9f55dca2-1021-49bf-aeb5-e31f34365884","Type":"ContainerDied","Data":"683f203c09b00f46297128c51924a7d0922dcfd22ea3ba360f0924f63f7e719a"} Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.857253 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.857306 4860 scope.go:117] "RemoveContainer" containerID="904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.859052 4860 generic.go:334] "Generic (PLEG): container finished" podID="8cfa6457-12a1-4319-a04f-020b725e3a8f" containerID="0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a" exitCode=0 Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.859075 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" event={"ID":"8cfa6457-12a1-4319-a04f-020b725e3a8f","Type":"ContainerDied","Data":"0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a"} Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.859091 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" event={"ID":"8cfa6457-12a1-4319-a04f-020b725e3a8f","Type":"ContainerDied","Data":"982b6c594fc2145df928c2128c9848b04892c1efd266640c636ff0ffc84519e7"} Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.859162 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57769dc656-zrvv7" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.877178 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57769dc656-zrvv7"] Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.881666 4860 scope.go:117] "RemoveContainer" containerID="904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc" Jan 23 08:18:43 crc kubenswrapper[4860]: E0123 08:18:43.882130 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc\": container with ID starting with 904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc not found: ID does not exist" containerID="904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.882155 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc"} err="failed to get container status \"904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc\": rpc error: code = NotFound desc = could not find container \"904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc\": container with ID starting with 904b933177f3c3822f1e5e5264b09deaec93a62a49ed8002243fa82d52ce38cc not found: ID does not exist" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.882174 4860 scope.go:117] "RemoveContainer" containerID="0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.882628 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57769dc656-zrvv7"] Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.895912 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf"] Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.897174 4860 scope.go:117] "RemoveContainer" containerID="0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a" Jan 23 08:18:43 crc kubenswrapper[4860]: E0123 08:18:43.898489 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a\": container with ID starting with 0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a not found: ID does not exist" containerID="0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.898518 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a"} err="failed to get container status \"0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a\": rpc error: code = NotFound desc = could not find container \"0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a\": container with ID starting with 0c673fa4fd40098b0057fd568b0b33560fde2a9f1721a0bbf5c7830e1930f98a not found: ID does not exist" Jan 23 08:18:43 crc kubenswrapper[4860]: I0123 08:18:43.898562 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb4f49856-28hlf"] Jan 23 08:18:44 crc 
kubenswrapper[4860]: I0123 08:18:44.035168 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn"] Jan 23 08:18:44 crc kubenswrapper[4860]: W0123 08:18:44.042390 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a2e39a_e1be_447e_8375_e43e083ad1ad.slice/crio-91c0ab833f7c7fee04a43e36e0d445e2940b69acce99748fd3060d0bc0c19641 WatchSource:0}: Error finding container 91c0ab833f7c7fee04a43e36e0d445e2940b69acce99748fd3060d0bc0c19641: Status 404 returned error can't find the container with id 91c0ab833f7c7fee04a43e36e0d445e2940b69acce99748fd3060d0bc0c19641 Jan 23 08:18:44 crc kubenswrapper[4860]: I0123 08:18:44.864921 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" event={"ID":"a6a2e39a-e1be-447e-8375-e43e083ad1ad","Type":"ContainerStarted","Data":"9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e"} Jan 23 08:18:44 crc kubenswrapper[4860]: I0123 08:18:44.865226 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" event={"ID":"a6a2e39a-e1be-447e-8375-e43e083ad1ad","Type":"ContainerStarted","Data":"91c0ab833f7c7fee04a43e36e0d445e2940b69acce99748fd3060d0bc0c19641"} Jan 23 08:18:44 crc kubenswrapper[4860]: I0123 08:18:44.882195 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" podStartSLOduration=3.882173307 podStartE2EDuration="3.882173307s" podCreationTimestamp="2026-01-23 08:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:44.880381583 +0000 UTC m=+231.508431768" watchObservedRunningTime="2026-01-23 08:18:44.882173307 +0000 UTC m=+231.510223512" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.357940 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-574c848897-hgccn"] Jan 23 08:18:45 crc kubenswrapper[4860]: E0123 08:18:45.358170 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfa6457-12a1-4319-a04f-020b725e3a8f" containerName="controller-manager" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.358184 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfa6457-12a1-4319-a04f-020b725e3a8f" containerName="controller-manager" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.358300 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfa6457-12a1-4319-a04f-020b725e3a8f" containerName="controller-manager" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.358652 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.361320 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.362749 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.362977 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.363712 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.363803 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.366781 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.372951 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.379406 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c848897-hgccn"] Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.555328 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlbfg\" (UniqueName: \"kubernetes.io/projected/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-kube-api-access-vlbfg\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.555379 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-proxy-ca-bundles\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.555411 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-config\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.555539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-client-ca\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.555639 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-serving-cert\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.656690 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-client-ca\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.657116 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-serving-cert\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.657161 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlbfg\" (UniqueName: \"kubernetes.io/projected/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-kube-api-access-vlbfg\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.657192 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-proxy-ca-bundles\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.657220 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-config\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.657704 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-client-ca\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.658749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-config\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.659546 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-proxy-ca-bundles\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " 
pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.663775 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfa6457-12a1-4319-a04f-020b725e3a8f" path="/var/lib/kubelet/pods/8cfa6457-12a1-4319-a04f-020b725e3a8f/volumes" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.664542 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f55dca2-1021-49bf-aeb5-e31f34365884" path="/var/lib/kubelet/pods/9f55dca2-1021-49bf-aeb5-e31f34365884/volumes" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.668846 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-serving-cert\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.675260 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlbfg\" (UniqueName: \"kubernetes.io/projected/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-kube-api-access-vlbfg\") pod \"controller-manager-574c848897-hgccn\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.682894 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.872412 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.877680 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:45 crc kubenswrapper[4860]: I0123 08:18:45.933249 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c848897-hgccn"] Jan 23 08:18:45 crc kubenswrapper[4860]: W0123 08:18:45.939466 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ce396e_bd86_4ae0_98f5_3dd1514b4a2e.slice/crio-18dee3bb9012e33aaef154ba1376a11846b78419aa7243e4ff136092765acef4 WatchSource:0}: Error finding container 18dee3bb9012e33aaef154ba1376a11846b78419aa7243e4ff136092765acef4: Status 404 returned error can't find the container with id 18dee3bb9012e33aaef154ba1376a11846b78419aa7243e4ff136092765acef4 Jan 23 08:18:46 crc kubenswrapper[4860]: I0123 08:18:46.876603 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" event={"ID":"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e","Type":"ContainerStarted","Data":"064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2"} Jan 23 08:18:46 crc kubenswrapper[4860]: I0123 08:18:46.876964 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:46 crc kubenswrapper[4860]: I0123 08:18:46.876976 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" 
event={"ID":"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e","Type":"ContainerStarted","Data":"18dee3bb9012e33aaef154ba1376a11846b78419aa7243e4ff136092765acef4"} Jan 23 08:18:46 crc kubenswrapper[4860]: I0123 08:18:46.881237 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:18:46 crc kubenswrapper[4860]: I0123 08:18:46.920091 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" podStartSLOduration=5.920068793 podStartE2EDuration="5.920068793s" podCreationTimestamp="2026-01-23 08:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:46.897254309 +0000 UTC m=+233.525304514" watchObservedRunningTime="2026-01-23 08:18:46.920068793 +0000 UTC m=+233.548118978" Jan 23 08:18:56 crc kubenswrapper[4860]: I0123 08:18:56.924678 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dp8mj"] Jan 23 08:18:56 crc kubenswrapper[4860]: I0123 08:18:56.926114 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:56 crc kubenswrapper[4860]: I0123 08:18:56.927975 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 08:18:56 crc kubenswrapper[4860]: I0123 08:18:56.946701 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dp8mj"] Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.030774 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn"] Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.031104 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" podUID="a6a2e39a-e1be-447e-8375-e43e083ad1ad" containerName="route-controller-manager" containerID="cri-o://9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e" gracePeriod=30 Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.090389 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afd08be-90e1-4877-8ef2-249d867ad2c6-utilities\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.090472 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afd08be-90e1-4877-8ef2-249d867ad2c6-catalog-content\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.090523 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtgvr\" (UniqueName: \"kubernetes.io/projected/2afd08be-90e1-4877-8ef2-249d867ad2c6-kube-api-access-dtgvr\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: 
I0123 08:18:57.192093 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afd08be-90e1-4877-8ef2-249d867ad2c6-utilities\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.192156 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afd08be-90e1-4877-8ef2-249d867ad2c6-catalog-content\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.192191 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtgvr\" (UniqueName: \"kubernetes.io/projected/2afd08be-90e1-4877-8ef2-249d867ad2c6-kube-api-access-dtgvr\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.192653 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2afd08be-90e1-4877-8ef2-249d867ad2c6-catalog-content\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.192678 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2afd08be-90e1-4877-8ef2-249d867ad2c6-utilities\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.213497 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtgvr\" (UniqueName: \"kubernetes.io/projected/2afd08be-90e1-4877-8ef2-249d867ad2c6-kube-api-access-dtgvr\") pod \"redhat-operators-dp8mj\" (UID: \"2afd08be-90e1-4877-8ef2-249d867ad2c6\") " pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.248504 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.668418 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dp8mj"] Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.696140 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.701970 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-config\") pod \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.702270 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzq6x\" (UniqueName: \"kubernetes.io/projected/a6a2e39a-e1be-447e-8375-e43e083ad1ad-kube-api-access-gzq6x\") pod \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.702319 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-client-ca\") pod \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.702366 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a2e39a-e1be-447e-8375-e43e083ad1ad-serving-cert\") pod \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\" (UID: \"a6a2e39a-e1be-447e-8375-e43e083ad1ad\") " Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.703250 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6a2e39a-e1be-447e-8375-e43e083ad1ad" (UID: "a6a2e39a-e1be-447e-8375-e43e083ad1ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.703302 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-config" (OuterVolumeSpecName: "config") pod "a6a2e39a-e1be-447e-8375-e43e083ad1ad" (UID: "a6a2e39a-e1be-447e-8375-e43e083ad1ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.709717 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a2e39a-e1be-447e-8375-e43e083ad1ad-kube-api-access-gzq6x" (OuterVolumeSpecName: "kube-api-access-gzq6x") pod "a6a2e39a-e1be-447e-8375-e43e083ad1ad" (UID: "a6a2e39a-e1be-447e-8375-e43e083ad1ad"). InnerVolumeSpecName "kube-api-access-gzq6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.710249 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a2e39a-e1be-447e-8375-e43e083ad1ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6a2e39a-e1be-447e-8375-e43e083ad1ad" (UID: "a6a2e39a-e1be-447e-8375-e43e083ad1ad"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.803841 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.803879 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzq6x\" (UniqueName: \"kubernetes.io/projected/a6a2e39a-e1be-447e-8375-e43e083ad1ad-kube-api-access-gzq6x\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.803893 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6a2e39a-e1be-447e-8375-e43e083ad1ad-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.803903 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6a2e39a-e1be-447e-8375-e43e083ad1ad-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.933202 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8mj" event={"ID":"2afd08be-90e1-4877-8ef2-249d867ad2c6","Type":"ContainerStarted","Data":"68c2a535c7da353a7d2f1827addfb06eaa0bea737a9f049d5b22dcb454455a1a"} Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.934380 4860 generic.go:334] "Generic (PLEG): container finished" podID="a6a2e39a-e1be-447e-8375-e43e083ad1ad" containerID="9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e" exitCode=0 Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.934429 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" event={"ID":"a6a2e39a-e1be-447e-8375-e43e083ad1ad","Type":"ContainerDied","Data":"9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e"} Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.934457 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" event={"ID":"a6a2e39a-e1be-447e-8375-e43e083ad1ad","Type":"ContainerDied","Data":"91c0ab833f7c7fee04a43e36e0d445e2940b69acce99748fd3060d0bc0c19641"} Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.934484 4860 scope.go:117] "RemoveContainer" containerID="9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.934635 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.956976 4860 scope.go:117] "RemoveContainer" containerID="9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e" Jan 23 08:18:57 crc kubenswrapper[4860]: E0123 08:18:57.958246 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e\": container with ID starting with 9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e not found: ID does not exist" containerID="9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.958323 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e"} err="failed to get container status \"9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e\": rpc error: code = NotFound desc = could not find container \"9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e\": container with ID starting with 9625cff1b5d61776697f8cbef8129dd61c141d18ca9eabfcf168f11155b16f4e not found: ID does not exist" Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.963708 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn"] Jan 23 08:18:57 crc kubenswrapper[4860]: I0123 08:18:57.966580 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657fdc8645-jrtxn"] Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.371660 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq"] Jan 23 08:18:58 crc kubenswrapper[4860]: E0123 08:18:58.371985 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a2e39a-e1be-447e-8375-e43e083ad1ad" containerName="route-controller-manager" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.372008 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a2e39a-e1be-447e-8375-e43e083ad1ad" containerName="route-controller-manager" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.372217 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a2e39a-e1be-447e-8375-e43e083ad1ad" containerName="route-controller-manager" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.372779 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.376700 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.377092 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.377229 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.377529 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.378193 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.382481 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.386071 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq"] Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.411993 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9686e541-8097-414b-bbce-732010f01472-serving-cert\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.412087 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9686e541-8097-414b-bbce-732010f01472-config\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.412152 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bss77\" (UniqueName: \"kubernetes.io/projected/9686e541-8097-414b-bbce-732010f01472-kube-api-access-bss77\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.412192 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9686e541-8097-414b-bbce-732010f01472-client-ca\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.513207 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9686e541-8097-414b-bbce-732010f01472-config\") pod 
\"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.513299 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bss77\" (UniqueName: \"kubernetes.io/projected/9686e541-8097-414b-bbce-732010f01472-kube-api-access-bss77\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.513377 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9686e541-8097-414b-bbce-732010f01472-client-ca\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.513539 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9686e541-8097-414b-bbce-732010f01472-serving-cert\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.514747 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9686e541-8097-414b-bbce-732010f01472-client-ca\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.516123 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9686e541-8097-414b-bbce-732010f01472-config\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.522149 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9686e541-8097-414b-bbce-732010f01472-serving-cert\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.542854 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bss77\" (UniqueName: \"kubernetes.io/projected/9686e541-8097-414b-bbce-732010f01472-kube-api-access-bss77\") pod \"route-controller-manager-549c9c86cb-t9wsq\" (UID: \"9686e541-8097-414b-bbce-732010f01472\") " pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.693306 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.729812 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qww5h"] Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.731581 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.734639 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.744312 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qww5h"] Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.818836 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-catalog-content\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.818883 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-utilities\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.818992 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tvxs\" (UniqueName: \"kubernetes.io/projected/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-kube-api-access-4tvxs\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.923046 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-catalog-content\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.923108 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-utilities\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.923146 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tvxs\" (UniqueName: \"kubernetes.io/projected/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-kube-api-access-4tvxs\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.923607 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-catalog-content\") pod \"redhat-marketplace-qww5h\" (UID: 
\"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.923791 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-utilities\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.940205 4860 generic.go:334] "Generic (PLEG): container finished" podID="2afd08be-90e1-4877-8ef2-249d867ad2c6" containerID="cd4a8e4bbfdc66c397a2b4053237d26df18b82d269c187670f1ca52cfca820de" exitCode=0 Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.940264 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8mj" event={"ID":"2afd08be-90e1-4877-8ef2-249d867ad2c6","Type":"ContainerDied","Data":"cd4a8e4bbfdc66c397a2b4053237d26df18b82d269c187670f1ca52cfca820de"} Jan 23 08:18:58 crc kubenswrapper[4860]: I0123 08:18:58.943725 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tvxs\" (UniqueName: \"kubernetes.io/projected/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-kube-api-access-4tvxs\") pod \"redhat-marketplace-qww5h\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.107490 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.119503 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq"] Jan 23 08:18:59 crc kubenswrapper[4860]: W0123 08:18:59.130848 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9686e541_8097_414b_bbce_732010f01472.slice/crio-9025cc8bc514bfe2250d2c93f16746153a7e9608e9526748453bc7565af2b3cc WatchSource:0}: Error finding container 9025cc8bc514bfe2250d2c93f16746153a7e9608e9526748453bc7565af2b3cc: Status 404 returned error can't find the container with id 9025cc8bc514bfe2250d2c93f16746153a7e9608e9526748453bc7565af2b3cc Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.321441 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ct7q8"] Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.322768 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.326792 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.327304 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg8xh\" (UniqueName: \"kubernetes.io/projected/87d48386-02fa-481d-81a2-0d96e1b4dd5b-kube-api-access-kg8xh\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.327350 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d48386-02fa-481d-81a2-0d96e1b4dd5b-catalog-content\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.327402 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d48386-02fa-481d-81a2-0d96e1b4dd5b-utilities\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.330415 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ct7q8"] Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.427964 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg8xh\" (UniqueName: \"kubernetes.io/projected/87d48386-02fa-481d-81a2-0d96e1b4dd5b-kube-api-access-kg8xh\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.428041 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d48386-02fa-481d-81a2-0d96e1b4dd5b-catalog-content\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.428110 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d48386-02fa-481d-81a2-0d96e1b4dd5b-utilities\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.428954 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87d48386-02fa-481d-81a2-0d96e1b4dd5b-utilities\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.429169 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87d48386-02fa-481d-81a2-0d96e1b4dd5b-catalog-content\") pod \"community-operators-ct7q8\" (UID: 
\"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.462754 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg8xh\" (UniqueName: \"kubernetes.io/projected/87d48386-02fa-481d-81a2-0d96e1b4dd5b-kube-api-access-kg8xh\") pod \"community-operators-ct7q8\" (UID: \"87d48386-02fa-481d-81a2-0d96e1b4dd5b\") " pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.521676 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qww5h"] Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.638164 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.663794 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a2e39a-e1be-447e-8375-e43e083ad1ad" path="/var/lib/kubelet/pods/a6a2e39a-e1be-447e-8375-e43e083ad1ad/volumes" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.949429 4860 generic.go:334] "Generic (PLEG): container finished" podID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerID="c8c6b84e42977c45726144f6d6b9fd04704822ebccec28355e9555851e6b49e4" exitCode=0 Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.949492 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qww5h" event={"ID":"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f","Type":"ContainerDied","Data":"c8c6b84e42977c45726144f6d6b9fd04704822ebccec28355e9555851e6b49e4"} Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.949517 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qww5h" event={"ID":"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f","Type":"ContainerStarted","Data":"53fb266b9bbf35cc9dceb88c8477d070c7bc540719471fb24264a698912b5f8e"} Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.951613 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8mj" event={"ID":"2afd08be-90e1-4877-8ef2-249d867ad2c6","Type":"ContainerStarted","Data":"0d8c14943f99bb68866edbab50c6a409bdeb823039df72e0d0db833ccfdea08a"} Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.953701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" event={"ID":"9686e541-8097-414b-bbce-732010f01472","Type":"ContainerStarted","Data":"4142f7fed4ee8720eb0d139ff168311c8f18106d429aebc36aeb68ed6a4368ed"} Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.953760 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" event={"ID":"9686e541-8097-414b-bbce-732010f01472","Type":"ContainerStarted","Data":"9025cc8bc514bfe2250d2c93f16746153a7e9608e9526748453bc7565af2b3cc"} Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.953992 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:18:59 crc kubenswrapper[4860]: I0123 08:18:59.959201 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.047616 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-549c9c86cb-t9wsq" podStartSLOduration=3.047597412 podStartE2EDuration="3.047597412s" podCreationTimestamp="2026-01-23 08:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:59.985401493 +0000 UTC m=+246.613451678" watchObservedRunningTime="2026-01-23 08:19:00.047597412 +0000 UTC m=+246.675647597" Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.068667 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ct7q8"] Jan 23 08:19:00 crc kubenswrapper[4860]: W0123 08:19:00.074697 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d48386_02fa_481d_81a2_0d96e1b4dd5b.slice/crio-47eacb7fcd973ccfa88f573e20f149622c5424ef09de2a336bb82b5ace797763 WatchSource:0}: Error finding container 47eacb7fcd973ccfa88f573e20f149622c5424ef09de2a336bb82b5ace797763: Status 404 returned error can't find the container with id 47eacb7fcd973ccfa88f573e20f149622c5424ef09de2a336bb82b5ace797763 Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.961294 4860 generic.go:334] "Generic (PLEG): container finished" podID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerID="6aa55b4c49f6930f63971969119e6da2aefa9fdc9fee3bab17aa0145664f9a79" exitCode=0 Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.961362 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qww5h" event={"ID":"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f","Type":"ContainerDied","Data":"6aa55b4c49f6930f63971969119e6da2aefa9fdc9fee3bab17aa0145664f9a79"} Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.964130 4860 generic.go:334] "Generic (PLEG): container finished" podID="2afd08be-90e1-4877-8ef2-249d867ad2c6" containerID="0d8c14943f99bb68866edbab50c6a409bdeb823039df72e0d0db833ccfdea08a" exitCode=0 Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.964220 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8mj" event={"ID":"2afd08be-90e1-4877-8ef2-249d867ad2c6","Type":"ContainerDied","Data":"0d8c14943f99bb68866edbab50c6a409bdeb823039df72e0d0db833ccfdea08a"} Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.965884 4860 generic.go:334] "Generic (PLEG): container finished" podID="87d48386-02fa-481d-81a2-0d96e1b4dd5b" containerID="bf2ea994cebfd20abf38f3300b65f09a8a09947b203cf86d5b95cdcc313d4b36" exitCode=0 Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.966059 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ct7q8" event={"ID":"87d48386-02fa-481d-81a2-0d96e1b4dd5b","Type":"ContainerDied","Data":"bf2ea994cebfd20abf38f3300b65f09a8a09947b203cf86d5b95cdcc313d4b36"} Jan 23 08:19:00 crc kubenswrapper[4860]: I0123 08:19:00.966099 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ct7q8" event={"ID":"87d48386-02fa-481d-81a2-0d96e1b4dd5b","Type":"ContainerStarted","Data":"47eacb7fcd973ccfa88f573e20f149622c5424ef09de2a336bb82b5ace797763"} Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.118742 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lnlb"] Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.119644 4860 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.125989 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.138000 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lnlb"] Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.253664 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308edb1c-e0d1-4e4f-8452-937ce8fd192f-catalog-content\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.253780 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308edb1c-e0d1-4e4f-8452-937ce8fd192f-utilities\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.253832 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4sn\" (UniqueName: \"kubernetes.io/projected/308edb1c-e0d1-4e4f-8452-937ce8fd192f-kube-api-access-9p4sn\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.354641 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308edb1c-e0d1-4e4f-8452-937ce8fd192f-catalog-content\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.354736 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308edb1c-e0d1-4e4f-8452-937ce8fd192f-utilities\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.354780 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4sn\" (UniqueName: \"kubernetes.io/projected/308edb1c-e0d1-4e4f-8452-937ce8fd192f-kube-api-access-9p4sn\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.355269 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/308edb1c-e0d1-4e4f-8452-937ce8fd192f-catalog-content\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.355444 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/308edb1c-e0d1-4e4f-8452-937ce8fd192f-utilities\") pod 
\"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.374992 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4sn\" (UniqueName: \"kubernetes.io/projected/308edb1c-e0d1-4e4f-8452-937ce8fd192f-kube-api-access-9p4sn\") pod \"certified-operators-8lnlb\" (UID: \"308edb1c-e0d1-4e4f-8452-937ce8fd192f\") " pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.435693 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.813232 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lnlb"] Jan 23 08:19:01 crc kubenswrapper[4860]: W0123 08:19:01.821512 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod308edb1c_e0d1_4e4f_8452_937ce8fd192f.slice/crio-42628d5c112a442acc565c15d275e0b745e80bd419144b4843424436180ac572 WatchSource:0}: Error finding container 42628d5c112a442acc565c15d275e0b745e80bd419144b4843424436180ac572: Status 404 returned error can't find the container with id 42628d5c112a442acc565c15d275e0b745e80bd419144b4843424436180ac572 Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.979156 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ct7q8" event={"ID":"87d48386-02fa-481d-81a2-0d96e1b4dd5b","Type":"ContainerStarted","Data":"d4e0f7fb4f8ac7838670fa53b7e41fb30f54faf3350043daecf601a2375bc9de"} Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.985967 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qww5h" event={"ID":"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f","Type":"ContainerStarted","Data":"2ed4f5fccd8e0a467ece10ec8578b3d48a6f26f312265e9e99191c4933fc9057"} Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.988013 4860 generic.go:334] "Generic (PLEG): container finished" podID="308edb1c-e0d1-4e4f-8452-937ce8fd192f" containerID="f0cf814017822e8e8fdd79857c2ffa495fd3ed72a2f50f16e9b8d0b3cba6a3e3" exitCode=0 Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.988085 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lnlb" event={"ID":"308edb1c-e0d1-4e4f-8452-937ce8fd192f","Type":"ContainerDied","Data":"f0cf814017822e8e8fdd79857c2ffa495fd3ed72a2f50f16e9b8d0b3cba6a3e3"} Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.988108 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lnlb" event={"ID":"308edb1c-e0d1-4e4f-8452-937ce8fd192f","Type":"ContainerStarted","Data":"42628d5c112a442acc565c15d275e0b745e80bd419144b4843424436180ac572"} Jan 23 08:19:01 crc kubenswrapper[4860]: I0123 08:19:01.991585 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dp8mj" event={"ID":"2afd08be-90e1-4877-8ef2-249d867ad2c6","Type":"ContainerStarted","Data":"8a83a2e19be676fb912edd83e70c8e8d8fa9aed176d1d2bbc6eff440c40d2bfa"} Jan 23 08:19:02 crc kubenswrapper[4860]: I0123 08:19:02.034429 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dp8mj" podStartSLOduration=3.48484908 
podStartE2EDuration="6.034413884s" podCreationTimestamp="2026-01-23 08:18:56 +0000 UTC" firstStartedPulling="2026-01-23 08:18:58.941529619 +0000 UTC m=+245.569579794" lastFinishedPulling="2026-01-23 08:19:01.491094413 +0000 UTC m=+248.119144598" observedRunningTime="2026-01-23 08:19:02.030884846 +0000 UTC m=+248.658935041" watchObservedRunningTime="2026-01-23 08:19:02.034413884 +0000 UTC m=+248.662464069" Jan 23 08:19:02 crc kubenswrapper[4860]: I0123 08:19:02.048326 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qww5h" podStartSLOduration=2.226557209 podStartE2EDuration="4.048310477s" podCreationTimestamp="2026-01-23 08:18:58 +0000 UTC" firstStartedPulling="2026-01-23 08:18:59.950872639 +0000 UTC m=+246.578922824" lastFinishedPulling="2026-01-23 08:19:01.772625897 +0000 UTC m=+248.400676092" observedRunningTime="2026-01-23 08:19:02.044064232 +0000 UTC m=+248.672114437" watchObservedRunningTime="2026-01-23 08:19:02.048310477 +0000 UTC m=+248.676360662" Jan 23 08:19:02 crc kubenswrapper[4860]: I0123 08:19:02.997811 4860 generic.go:334] "Generic (PLEG): container finished" podID="87d48386-02fa-481d-81a2-0d96e1b4dd5b" containerID="d4e0f7fb4f8ac7838670fa53b7e41fb30f54faf3350043daecf601a2375bc9de" exitCode=0 Jan 23 08:19:02 crc kubenswrapper[4860]: I0123 08:19:02.997873 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ct7q8" event={"ID":"87d48386-02fa-481d-81a2-0d96e1b4dd5b","Type":"ContainerDied","Data":"d4e0f7fb4f8ac7838670fa53b7e41fb30f54faf3350043daecf601a2375bc9de"} Jan 23 08:19:04 crc kubenswrapper[4860]: I0123 08:19:04.002234 4860 generic.go:334] "Generic (PLEG): container finished" podID="308edb1c-e0d1-4e4f-8452-937ce8fd192f" containerID="2523550d4ea25921be1106143b4ea7b79d0bbffc9964e293092a7c65d4b369d6" exitCode=0 Jan 23 08:19:04 crc kubenswrapper[4860]: I0123 08:19:04.002472 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lnlb" event={"ID":"308edb1c-e0d1-4e4f-8452-937ce8fd192f","Type":"ContainerDied","Data":"2523550d4ea25921be1106143b4ea7b79d0bbffc9964e293092a7c65d4b369d6"} Jan 23 08:19:04 crc kubenswrapper[4860]: I0123 08:19:04.006523 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ct7q8" event={"ID":"87d48386-02fa-481d-81a2-0d96e1b4dd5b","Type":"ContainerStarted","Data":"1a8da9ac8b51e8d9dce15895285c9d15320634a0c0a2da348dc61002e6842626"} Jan 23 08:19:05 crc kubenswrapper[4860]: I0123 08:19:05.013917 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lnlb" event={"ID":"308edb1c-e0d1-4e4f-8452-937ce8fd192f","Type":"ContainerStarted","Data":"31af5c9207fb16aa4dbb2a783e70489ab0e7afdbfa35ab986114755552f0e98b"} Jan 23 08:19:05 crc kubenswrapper[4860]: I0123 08:19:05.031224 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lnlb" podStartSLOduration=1.5823607389999998 podStartE2EDuration="4.03120094s" podCreationTimestamp="2026-01-23 08:19:01 +0000 UTC" firstStartedPulling="2026-01-23 08:19:01.989460612 +0000 UTC m=+248.617510807" lastFinishedPulling="2026-01-23 08:19:04.438300823 +0000 UTC m=+251.066351008" observedRunningTime="2026-01-23 08:19:05.029531099 +0000 UTC m=+251.657581294" watchObservedRunningTime="2026-01-23 08:19:05.03120094 +0000 UTC m=+251.659251125" Jan 23 08:19:05 crc kubenswrapper[4860]: I0123 08:19:05.032738 4860 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ct7q8" podStartSLOduration=3.600874157 podStartE2EDuration="6.032731238s" podCreationTimestamp="2026-01-23 08:18:59 +0000 UTC" firstStartedPulling="2026-01-23 08:19:00.967098969 +0000 UTC m=+247.595149154" lastFinishedPulling="2026-01-23 08:19:03.39895605 +0000 UTC m=+250.027006235" observedRunningTime="2026-01-23 08:19:04.085292769 +0000 UTC m=+250.713342964" watchObservedRunningTime="2026-01-23 08:19:05.032731238 +0000 UTC m=+251.660781423" Jan 23 08:19:07 crc kubenswrapper[4860]: I0123 08:19:07.249312 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:19:07 crc kubenswrapper[4860]: I0123 08:19:07.249679 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:19:07 crc kubenswrapper[4860]: I0123 08:19:07.291104 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:19:08 crc kubenswrapper[4860]: I0123 08:19:08.064855 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dp8mj" Jan 23 08:19:09 crc kubenswrapper[4860]: I0123 08:19:09.108486 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:19:09 crc kubenswrapper[4860]: I0123 08:19:09.108536 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:19:09 crc kubenswrapper[4860]: I0123 08:19:09.167330 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:19:09 crc kubenswrapper[4860]: I0123 08:19:09.639380 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:19:09 crc kubenswrapper[4860]: I0123 08:19:09.639713 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:19:09 crc kubenswrapper[4860]: I0123 08:19:09.680227 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:19:10 crc kubenswrapper[4860]: I0123 08:19:10.108434 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ct7q8" Jan 23 08:19:10 crc kubenswrapper[4860]: I0123 08:19:10.111957 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:19:11 crc kubenswrapper[4860]: I0123 08:19:11.436765 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:11 crc kubenswrapper[4860]: I0123 08:19:11.436814 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:11 crc kubenswrapper[4860]: I0123 08:19:11.488488 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:12 crc kubenswrapper[4860]: I0123 08:19:12.086103 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-8lnlb" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.598666 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hsmhr"] Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.599996 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.610827 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hsmhr"] Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734640 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-bound-sa-token\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734708 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734752 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734839 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734881 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-trusted-ca\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734917 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-registry-certificates\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734970 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2vcp\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-kube-api-access-c2vcp\") pod 
\"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.734999 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-registry-tls\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.758825 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.836421 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-trusted-ca\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.836550 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-registry-certificates\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.836613 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2vcp\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-kube-api-access-c2vcp\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.836643 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-registry-tls\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.836673 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-bound-sa-token\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.836699 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 
08:19:30.836740 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.837872 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.838274 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-registry-certificates\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.838320 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-trusted-ca\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.849090 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-registry-tls\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.849098 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.855236 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2vcp\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-kube-api-access-c2vcp\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.856484 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c51e59f4-a3a6-441b-bd51-025cd9ba5dc3-bound-sa-token\") pod \"image-registry-66df7c8f76-hsmhr\" (UID: \"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3\") " pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:30 crc kubenswrapper[4860]: I0123 08:19:30.919713 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:31 crc kubenswrapper[4860]: I0123 08:19:31.356494 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hsmhr"] Jan 23 08:19:32 crc kubenswrapper[4860]: I0123 08:19:32.154711 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" event={"ID":"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3","Type":"ContainerStarted","Data":"02c6a27e179820a9dbed5adb071686563d15a6720d34143ed3c5eb2f33e5866b"} Jan 23 08:19:32 crc kubenswrapper[4860]: I0123 08:19:32.155031 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:32 crc kubenswrapper[4860]: I0123 08:19:32.155043 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" event={"ID":"c51e59f4-a3a6-441b-bd51-025cd9ba5dc3","Type":"ContainerStarted","Data":"318ff5f028f3ea2646a4b5ad656934606ba8414d8d68d724fd113c35f16b78f6"} Jan 23 08:19:32 crc kubenswrapper[4860]: I0123 08:19:32.170799 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" podStartSLOduration=2.170784808 podStartE2EDuration="2.170784808s" podCreationTimestamp="2026-01-23 08:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:32.169193708 +0000 UTC m=+278.797243913" watchObservedRunningTime="2026-01-23 08:19:32.170784808 +0000 UTC m=+278.798834993" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.062336 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-574c848897-hgccn"] Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.062856 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" podUID="c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" containerName="controller-manager" containerID="cri-o://064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2" gracePeriod=30 Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.471207 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.620472 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-proxy-ca-bundles\") pod \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.620576 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlbfg\" (UniqueName: \"kubernetes.io/projected/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-kube-api-access-vlbfg\") pod \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.620600 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-client-ca\") pod \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.620651 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-serving-cert\") pod \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.620684 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-config\") pod \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\" (UID: \"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e\") " Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.621274 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" (UID: "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.621287 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" (UID: "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.621443 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-config" (OuterVolumeSpecName: "config") pod "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" (UID: "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.626636 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-kube-api-access-vlbfg" (OuterVolumeSpecName: "kube-api-access-vlbfg") pod "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" (UID: "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e"). InnerVolumeSpecName "kube-api-access-vlbfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.626668 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" (UID: "c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.721728 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlbfg\" (UniqueName: \"kubernetes.io/projected/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-kube-api-access-vlbfg\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.721764 4860 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.721774 4860 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.721782 4860 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:37 crc kubenswrapper[4860]: I0123 08:19:37.721792 4860 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.185639 4860 generic.go:334] "Generic (PLEG): container finished" podID="c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" containerID="064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2" exitCode=0 Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.185687 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" event={"ID":"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e","Type":"ContainerDied","Data":"064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2"} Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.185713 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" event={"ID":"c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e","Type":"ContainerDied","Data":"18dee3bb9012e33aaef154ba1376a11846b78419aa7243e4ff136092765acef4"} Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.185721 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-574c848897-hgccn" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.185732 4860 scope.go:117] "RemoveContainer" containerID="064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.204138 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-574c848897-hgccn"] Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.205328 4860 scope.go:117] "RemoveContainer" containerID="064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2" Jan 23 08:19:38 crc kubenswrapper[4860]: E0123 08:19:38.205774 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2\": container with ID starting with 064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2 not found: ID does not exist" containerID="064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.205802 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2"} err="failed to get container status \"064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2\": rpc error: code = NotFound desc = could not find container \"064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2\": container with ID starting with 064049e15f65c04544acb79dfd820e827b9c1c9bafd43811d73f75792d5b57f2 not found: ID does not exist" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.207153 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-574c848897-hgccn"] Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.392287 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-575ccb4647-vdf4s"] Jan 23 08:19:38 crc kubenswrapper[4860]: E0123 08:19:38.392514 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" containerName="controller-manager" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.392528 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" containerName="controller-manager" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.392664 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" containerName="controller-manager" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.393073 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.394899 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.395123 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.395296 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.397767 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.397772 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.397936 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.406092 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.408648 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-575ccb4647-vdf4s"] Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.529918 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0e0f01-37da-418e-8e70-53df98830a8e-serving-cert\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.529973 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-config\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.530032 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-proxy-ca-bundles\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.530068 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-client-ca\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.530095 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4bfp\" (UniqueName: 
\"kubernetes.io/projected/ce0e0f01-37da-418e-8e70-53df98830a8e-kube-api-access-f4bfp\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.631294 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-proxy-ca-bundles\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.631340 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-client-ca\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.631358 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4bfp\" (UniqueName: \"kubernetes.io/projected/ce0e0f01-37da-418e-8e70-53df98830a8e-kube-api-access-f4bfp\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.631422 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0e0f01-37da-418e-8e70-53df98830a8e-serving-cert\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.631442 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-config\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.632623 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-client-ca\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.632741 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-config\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.632859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce0e0f01-37da-418e-8e70-53df98830a8e-proxy-ca-bundles\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " 
pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.636088 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce0e0f01-37da-418e-8e70-53df98830a8e-serving-cert\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.649480 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4bfp\" (UniqueName: \"kubernetes.io/projected/ce0e0f01-37da-418e-8e70-53df98830a8e-kube-api-access-f4bfp\") pod \"controller-manager-575ccb4647-vdf4s\" (UID: \"ce0e0f01-37da-418e-8e70-53df98830a8e\") " pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.709829 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:38 crc kubenswrapper[4860]: I0123 08:19:38.914971 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-575ccb4647-vdf4s"] Jan 23 08:19:39 crc kubenswrapper[4860]: I0123 08:19:39.193443 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" event={"ID":"ce0e0f01-37da-418e-8e70-53df98830a8e","Type":"ContainerStarted","Data":"9532a7c5159d3908df72519b10417ddbd8d56c3474b3e0651476bed4d3ea0944"} Jan 23 08:19:39 crc kubenswrapper[4860]: I0123 08:19:39.193863 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:39 crc kubenswrapper[4860]: I0123 08:19:39.193879 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" event={"ID":"ce0e0f01-37da-418e-8e70-53df98830a8e","Type":"ContainerStarted","Data":"dc4447588e07a55c89161ab51faf6f68c5f422153e4958f164af67e9909414a8"} Jan 23 08:19:39 crc kubenswrapper[4860]: I0123 08:19:39.200263 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" Jan 23 08:19:39 crc kubenswrapper[4860]: I0123 08:19:39.212668 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-575ccb4647-vdf4s" podStartSLOduration=2.212652004 podStartE2EDuration="2.212652004s" podCreationTimestamp="2026-01-23 08:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:39.210845868 +0000 UTC m=+285.838896063" watchObservedRunningTime="2026-01-23 08:19:39.212652004 +0000 UTC m=+285.840702189" Jan 23 08:19:39 crc kubenswrapper[4860]: I0123 08:19:39.665325 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e" path="/var/lib/kubelet/pods/c0ce396e-bd86-4ae0-98f5-3dd1514b4a2e/volumes" Jan 23 08:19:50 crc kubenswrapper[4860]: I0123 08:19:50.923716 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hsmhr" Jan 23 08:19:50 crc kubenswrapper[4860]: I0123 08:19:50.977835 4860 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fsxq7"] Jan 23 08:19:53 crc kubenswrapper[4860]: I0123 08:19:53.518175 4860 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.016426 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" podUID="1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" containerName="registry" containerID="cri-o://71cd2ce37f85fad3ba55ecc4402fbe04d024f5e62e91d4c912f376d1e791398f" gracePeriod=30 Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.400659 4860 generic.go:334] "Generic (PLEG): container finished" podID="1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" containerID="71cd2ce37f85fad3ba55ecc4402fbe04d024f5e62e91d4c912f376d1e791398f" exitCode=0 Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.400763 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" event={"ID":"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438","Type":"ContainerDied","Data":"71cd2ce37f85fad3ba55ecc4402fbe04d024f5e62e91d4c912f376d1e791398f"} Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.401152 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" event={"ID":"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438","Type":"ContainerDied","Data":"3f1857537940c37c15e973f555290eadd3e33e2084d8770e6146b6cb4ccc9d8a"} Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.401171 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1857537940c37c15e973f555290eadd3e33e2084d8770e6146b6cb4ccc9d8a" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.430759 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559291 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8hdt\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-kube-api-access-c8hdt\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559363 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-ca-trust-extracted\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559634 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559679 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-trusted-ca\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559734 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-certificates\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559767 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-tls\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559828 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-installation-pull-secrets\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.559964 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-bound-sa-token\") pod \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\" (UID: \"1e27ec47-ec5e-4dd9-8e53-e2c9c8088438\") " Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.560549 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.560825 4860 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.560989 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.565334 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.565813 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.565957 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-kube-api-access-c8hdt" (OuterVolumeSpecName: "kube-api-access-c8hdt") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "kube-api-access-c8hdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.566041 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.571382 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.574550 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" (UID: "1e27ec47-ec5e-4dd9-8e53-e2c9c8088438"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.662562 4860 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.662616 4860 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.662625 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8hdt\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-kube-api-access-c8hdt\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.662634 4860 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.662643 4860 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:16 crc kubenswrapper[4860]: I0123 08:20:16.662654 4860 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:17 crc kubenswrapper[4860]: I0123 08:20:17.406962 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fsxq7" Jan 23 08:20:17 crc kubenswrapper[4860]: I0123 08:20:17.447050 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fsxq7"] Jan 23 08:20:17 crc kubenswrapper[4860]: I0123 08:20:17.452976 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fsxq7"] Jan 23 08:20:17 crc kubenswrapper[4860]: I0123 08:20:17.664590 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" path="/var/lib/kubelet/pods/1e27ec47-ec5e-4dd9-8e53-e2c9c8088438/volumes" Jan 23 08:20:56 crc kubenswrapper[4860]: I0123 08:20:56.775632 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:20:56 crc kubenswrapper[4860]: I0123 08:20:56.776141 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:21:27 crc kubenswrapper[4860]: I0123 08:21:27.049890 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:21:27 crc kubenswrapper[4860]: I0123 08:21:27.050656 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:21:54 crc kubenswrapper[4860]: I0123 08:21:54.447422 4860 scope.go:117] "RemoveContainer" containerID="71cd2ce37f85fad3ba55ecc4402fbe04d024f5e62e91d4c912f376d1e791398f" Jan 23 08:21:56 crc kubenswrapper[4860]: I0123 08:21:56.775870 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:21:56 crc kubenswrapper[4860]: I0123 08:21:56.775936 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:21:56 crc kubenswrapper[4860]: I0123 08:21:56.775974 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:21:56 crc kubenswrapper[4860]: I0123 08:21:56.776456 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"789e53bf4116462d0d867afc4faead4f91efb1364fa83cabf4ea344608af1714"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:21:56 crc kubenswrapper[4860]: I0123 08:21:56.776693 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://789e53bf4116462d0d867afc4faead4f91efb1364fa83cabf4ea344608af1714" gracePeriod=600 Jan 23 08:21:57 crc kubenswrapper[4860]: I0123 08:21:57.223896 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="789e53bf4116462d0d867afc4faead4f91efb1364fa83cabf4ea344608af1714" exitCode=0 Jan 23 08:21:57 crc kubenswrapper[4860]: I0123 08:21:57.224095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"789e53bf4116462d0d867afc4faead4f91efb1364fa83cabf4ea344608af1714"} Jan 23 08:21:57 crc kubenswrapper[4860]: I0123 08:21:57.224296 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"a5f19eb6680810f114123e7b69b3ddca9ec3e33281e2c5954aa55c863d5a13f8"} Jan 23 08:21:57 crc kubenswrapper[4860]: I0123 08:21:57.224323 4860 scope.go:117] "RemoveContainer" containerID="0b3151f70a9f1405c480998d872f2f689c2d81b91f5b9582195941d5aad7277c" Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.891421 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qv8z"] Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892376 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-controller" containerID="cri-o://e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892437 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="nbdb" containerID="cri-o://530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892522 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="sbdb" containerID="cri-o://ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892530 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-acl-logging" containerID="cri-o://c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892563 4860 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="northd" containerID="cri-o://c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892619 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-node" containerID="cri-o://85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.892516 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" gracePeriod=30 Jan 23 08:24:15 crc kubenswrapper[4860]: I0123 08:24:15.928633 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovnkube-controller" containerID="cri-o://0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" gracePeriod=30 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.356971 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qv8z_d28d122d-d793-4f09-9f3d-00a5a5b93e6b/ovn-acl-logging/0.log" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.359333 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qv8z_d28d122d-d793-4f09-9f3d-00a5a5b93e6b/ovn-controller/0.log" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.359946 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427396 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-netd\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427454 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-systemd\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427474 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-ovn-kubernetes\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427507 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427534 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-ovn\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427554 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-log-socket\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427581 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-env-overrides\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427597 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-bin\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427619 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovn-node-metrics-cert\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427641 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-config\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427664 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-node-log\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427688 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-script-lib\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427712 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-systemd-units\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427726 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-netns\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427747 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-slash\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427766 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89szh\" (UniqueName: \"kubernetes.io/projected/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-kube-api-access-89szh\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427786 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-kubelet\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427801 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-openvswitch\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427837 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-etc-openvswitch\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.427858 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-var-lib-openvswitch\") pod \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\" (UID: \"d28d122d-d793-4f09-9f3d-00a5a5b93e6b\") " Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.428103 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.428145 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.428983 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-node-log" (OuterVolumeSpecName: "node-log") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.429264 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.429293 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.429319 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.429434 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-slash" (OuterVolumeSpecName: "host-slash") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.429442 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.430537 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.430624 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.430679 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.430738 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.431571 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.431654 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.431751 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-log-socket" (OuterVolumeSpecName: "log-socket") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.431799 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.438806 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.448724 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-kube-api-access-89szh" (OuterVolumeSpecName: "kube-api-access-89szh") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "kube-api-access-89szh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.449174 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.453812 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbbk2"] Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.454535 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.454631 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.454701 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" containerName="registry" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.454765 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" containerName="registry" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.454824 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-acl-logging" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.454889 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-acl-logging" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.454953 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovnkube-controller" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455039 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovnkube-controller" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.455120 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-node" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455193 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-node" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.455259 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-controller" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455324 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-controller" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.455397 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="sbdb" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455457 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="sbdb" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.455518 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kubecfg-setup" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455577 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kubecfg-setup" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.455630 4860 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="northd" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455687 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="northd" Jan 23 08:24:16 crc kubenswrapper[4860]: E0123 08:24:16.455760 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="nbdb" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.455869 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="nbdb" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456064 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456216 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e27ec47-ec5e-4dd9-8e53-e2c9c8088438" containerName="registry" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456280 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="northd" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456341 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="nbdb" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456402 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovnkube-controller" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456462 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-controller" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456521 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="ovn-acl-logging" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456583 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="kube-rbac-proxy-node" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.456640 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerName="sbdb" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.459698 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.461584 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d28d122d-d793-4f09-9f3d-00a5a5b93e6b" (UID: "d28d122d-d793-4f09-9f3d-00a5a5b93e6b"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529088 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-systemd-units\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529155 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovn-node-metrics-cert\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529179 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-ovn\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529260 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-run-netns\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529310 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qlg\" (UniqueName: \"kubernetes.io/projected/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-kube-api-access-r8qlg\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529350 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-kubelet\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529369 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-cni-bin\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529391 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-node-log\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529473 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-log-socket\") pod 
\"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529528 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-cni-netd\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529548 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovnkube-config\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529587 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-etc-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529617 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-var-lib-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529634 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-slash\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529652 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529686 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-systemd\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529712 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-env-overrides\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529783 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529814 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovnkube-script-lib\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529868 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529928 4860 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529945 4860 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529978 4860 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529989 4860 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.529998 4860 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530007 4860 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530031 4860 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530040 4860 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530050 4860 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 
08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530059 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530068 4860 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530076 4860 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530084 4860 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530092 4860 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530102 4860 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530111 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89szh\" (UniqueName: \"kubernetes.io/projected/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-kube-api-access-89szh\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530121 4860 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530130 4860 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530138 4860 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.530146 4860 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d28d122d-d793-4f09-9f3d-00a5a5b93e6b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.630798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovn-node-metrics-cert\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.630844 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-ovn\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.630901 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-run-netns\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.630990 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-ovn\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631044 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-run-netns\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.630931 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qlg\" (UniqueName: \"kubernetes.io/projected/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-kube-api-access-r8qlg\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631139 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-kubelet\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631164 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-cni-bin\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631198 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-node-log\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631213 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-log-socket\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631232 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-cni-netd\") pod \"ovnkube-node-qbbk2\" (UID: 
\"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631235 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-kubelet\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631249 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovnkube-config\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631280 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-log-socket\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631289 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-etc-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631295 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-cni-bin\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631317 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-etc-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631326 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-cni-netd\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631331 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-var-lib-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631354 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-var-lib-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 
08:24:16.631374 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-slash\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631417 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-systemd\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631422 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-openvswitch\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631436 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-env-overrides\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631457 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-run-systemd\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631415 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-slash\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631474 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631506 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631524 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovnkube-script-lib\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631381 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-node-log\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631600 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-systemd-units\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631679 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-systemd-units\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631922 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-env-overrides\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.631970 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovnkube-config\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.632364 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovnkube-script-lib\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.633918 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-ovn-node-metrics-cert\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.646216 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qlg\" (UniqueName: \"kubernetes.io/projected/64bb7c59-ed66-44fe-a1ce-dd10f431fd64-kube-api-access-r8qlg\") pod \"ovnkube-node-qbbk2\" (UID: \"64bb7c59-ed66-44fe-a1ce-dd10f431fd64\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.774610 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.956477 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qv8z_d28d122d-d793-4f09-9f3d-00a5a5b93e6b/ovn-acl-logging/0.log" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957253 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5qv8z_d28d122d-d793-4f09-9f3d-00a5a5b93e6b/ovn-controller/0.log" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957709 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957755 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957763 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957770 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957778 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957788 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957794 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" exitCode=143 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957800 4860 generic.go:334] "Generic (PLEG): container finished" podID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" exitCode=143 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957806 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" 
event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957862 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957862 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957892 4860 scope.go:117] "RemoveContainer" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.957878 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958006 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958052 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958064 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958077 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958087 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958093 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958100 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958108 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958115 4860 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958120 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958126 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958131 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958136 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958141 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958147 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958152 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958158 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958166 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958172 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958177 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958183 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958189 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958194 4860 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958200 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958206 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958211 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958218 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5qv8z" event={"ID":"d28d122d-d793-4f09-9f3d-00a5a5b93e6b","Type":"ContainerDied","Data":"9114060d576b68de31b6aa3507ad517bbb40df73e4ee846daf5fd33045b88b64"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958226 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958232 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958237 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958243 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958248 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958255 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958260 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958266 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.958271 4860 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.965169 4860 generic.go:334] "Generic (PLEG): container finished" 
podID="64bb7c59-ed66-44fe-a1ce-dd10f431fd64" containerID="3f311631a536a87e5d76c6538df0bce6a24c9aca0ba4d358fb18b0df5b46084d" exitCode=0 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.965208 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerDied","Data":"3f311631a536a87e5d76c6538df0bce6a24c9aca0ba4d358fb18b0df5b46084d"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.965279 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"a748ed69cecef15505cd4246acf54814bf937e5fe395b335607b6a897540e162"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.968347 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b55cn_c3224b07-df3e-4f30-9d73-cf34290cfecb/kube-multus/0.log" Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.968515 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b55cn" event={"ID":"c3224b07-df3e-4f30-9d73-cf34290cfecb","Type":"ContainerDied","Data":"cd465c1d884cf5575bcb88a353eab28fd9c3f63fad448d2531bb6e16950d961c"} Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.968421 4860 generic.go:334] "Generic (PLEG): container finished" podID="c3224b07-df3e-4f30-9d73-cf34290cfecb" containerID="cd465c1d884cf5575bcb88a353eab28fd9c3f63fad448d2531bb6e16950d961c" exitCode=2 Jan 23 08:24:16 crc kubenswrapper[4860]: I0123 08:24:16.969009 4860 scope.go:117] "RemoveContainer" containerID="cd465c1d884cf5575bcb88a353eab28fd9c3f63fad448d2531bb6e16950d961c" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.002614 4860 scope.go:117] "RemoveContainer" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.016866 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qv8z"] Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.019847 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5qv8z"] Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.040576 4860 scope.go:117] "RemoveContainer" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.064037 4860 scope.go:117] "RemoveContainer" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.091120 4860 scope.go:117] "RemoveContainer" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.107576 4860 scope.go:117] "RemoveContainer" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.124111 4860 scope.go:117] "RemoveContainer" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.136247 4860 scope.go:117] "RemoveContainer" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.155227 4860 scope.go:117] "RemoveContainer" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 
08:24:17.169641 4860 scope.go:117] "RemoveContainer" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.170010 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": container with ID starting with 0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767 not found: ID does not exist" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.170063 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} err="failed to get container status \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": rpc error: code = NotFound desc = could not find container \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": container with ID starting with 0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.170085 4860 scope.go:117] "RemoveContainer" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.170324 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": container with ID starting with ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca not found: ID does not exist" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.170353 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} err="failed to get container status \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": rpc error: code = NotFound desc = could not find container \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": container with ID starting with ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.170372 4860 scope.go:117] "RemoveContainer" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.170781 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": container with ID starting with 530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2 not found: ID does not exist" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.170807 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} err="failed to get container status \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": rpc error: code = NotFound desc = could not find container \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": container with ID 
starting with 530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.170828 4860 scope.go:117] "RemoveContainer" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.171167 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": container with ID starting with c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778 not found: ID does not exist" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.171221 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} err="failed to get container status \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": rpc error: code = NotFound desc = could not find container \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": container with ID starting with c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.171254 4860 scope.go:117] "RemoveContainer" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.171524 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": container with ID starting with daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e not found: ID does not exist" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.171618 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} err="failed to get container status \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": rpc error: code = NotFound desc = could not find container \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": container with ID starting with daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.171645 4860 scope.go:117] "RemoveContainer" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.171898 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": container with ID starting with 85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a not found: ID does not exist" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.171927 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} err="failed to get container status 
\"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": rpc error: code = NotFound desc = could not find container \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": container with ID starting with 85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.171945 4860 scope.go:117] "RemoveContainer" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.172484 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": container with ID starting with c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9 not found: ID does not exist" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.172507 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} err="failed to get container status \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": rpc error: code = NotFound desc = could not find container \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": container with ID starting with c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.172524 4860 scope.go:117] "RemoveContainer" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.172784 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": container with ID starting with e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b not found: ID does not exist" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.172815 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} err="failed to get container status \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": rpc error: code = NotFound desc = could not find container \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": container with ID starting with e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.172834 4860 scope.go:117] "RemoveContainer" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" Jan 23 08:24:17 crc kubenswrapper[4860]: E0123 08:24:17.173124 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": container with ID starting with 916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9 not found: ID does not exist" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.173164 4860 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} err="failed to get container status \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": rpc error: code = NotFound desc = could not find container \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": container with ID starting with 916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.173188 4860 scope.go:117] "RemoveContainer" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.173467 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} err="failed to get container status \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": rpc error: code = NotFound desc = could not find container \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": container with ID starting with 0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.173493 4860 scope.go:117] "RemoveContainer" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174036 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} err="failed to get container status \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": rpc error: code = NotFound desc = could not find container \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": container with ID starting with ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174061 4860 scope.go:117] "RemoveContainer" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174376 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} err="failed to get container status \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": rpc error: code = NotFound desc = could not find container \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": container with ID starting with 530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174400 4860 scope.go:117] "RemoveContainer" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174625 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} err="failed to get container status \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": rpc error: code = NotFound desc = could not find container \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": container with ID starting with c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778 
not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174646 4860 scope.go:117] "RemoveContainer" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174859 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} err="failed to get container status \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": rpc error: code = NotFound desc = could not find container \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": container with ID starting with daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.174879 4860 scope.go:117] "RemoveContainer" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175129 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} err="failed to get container status \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": rpc error: code = NotFound desc = could not find container \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": container with ID starting with 85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175162 4860 scope.go:117] "RemoveContainer" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175404 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} err="failed to get container status \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": rpc error: code = NotFound desc = could not find container \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": container with ID starting with c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175424 4860 scope.go:117] "RemoveContainer" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175643 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} err="failed to get container status \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": rpc error: code = NotFound desc = could not find container \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": container with ID starting with e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175668 4860 scope.go:117] "RemoveContainer" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175851 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} err="failed to get 
container status \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": rpc error: code = NotFound desc = could not find container \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": container with ID starting with 916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.175880 4860 scope.go:117] "RemoveContainer" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176222 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} err="failed to get container status \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": rpc error: code = NotFound desc = could not find container \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": container with ID starting with 0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176247 4860 scope.go:117] "RemoveContainer" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176462 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} err="failed to get container status \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": rpc error: code = NotFound desc = could not find container \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": container with ID starting with ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176485 4860 scope.go:117] "RemoveContainer" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176683 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} err="failed to get container status \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": rpc error: code = NotFound desc = could not find container \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": container with ID starting with 530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176703 4860 scope.go:117] "RemoveContainer" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176950 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} err="failed to get container status \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": rpc error: code = NotFound desc = could not find container \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": container with ID starting with c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.176987 4860 scope.go:117] "RemoveContainer" 
containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.177434 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} err="failed to get container status \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": rpc error: code = NotFound desc = could not find container \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": container with ID starting with daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.177452 4860 scope.go:117] "RemoveContainer" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.177651 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} err="failed to get container status \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": rpc error: code = NotFound desc = could not find container \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": container with ID starting with 85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.177676 4860 scope.go:117] "RemoveContainer" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.177869 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} err="failed to get container status \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": rpc error: code = NotFound desc = could not find container \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": container with ID starting with c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.177888 4860 scope.go:117] "RemoveContainer" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178123 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} err="failed to get container status \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": rpc error: code = NotFound desc = could not find container \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": container with ID starting with e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178147 4860 scope.go:117] "RemoveContainer" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178387 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} err="failed to get container status \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": rpc error: code = NotFound desc = could not find 
container \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": container with ID starting with 916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178409 4860 scope.go:117] "RemoveContainer" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178601 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} err="failed to get container status \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": rpc error: code = NotFound desc = could not find container \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": container with ID starting with 0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178618 4860 scope.go:117] "RemoveContainer" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178843 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} err="failed to get container status \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": rpc error: code = NotFound desc = could not find container \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": container with ID starting with ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.178875 4860 scope.go:117] "RemoveContainer" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.179121 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} err="failed to get container status \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": rpc error: code = NotFound desc = could not find container \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": container with ID starting with 530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.179140 4860 scope.go:117] "RemoveContainer" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.179410 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} err="failed to get container status \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": rpc error: code = NotFound desc = could not find container \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": container with ID starting with c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.179436 4860 scope.go:117] "RemoveContainer" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.179694 4860 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} err="failed to get container status \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": rpc error: code = NotFound desc = could not find container \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": container with ID starting with daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.179721 4860 scope.go:117] "RemoveContainer" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.180286 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} err="failed to get container status \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": rpc error: code = NotFound desc = could not find container \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": container with ID starting with 85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.180313 4860 scope.go:117] "RemoveContainer" containerID="c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.180571 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9"} err="failed to get container status \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": rpc error: code = NotFound desc = could not find container \"c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9\": container with ID starting with c04063cf5b28baf4f2982592f9a80501e025ba9fe8cb2b5de4eba57317e838b9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.180599 4860 scope.go:117] "RemoveContainer" containerID="e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.180798 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b"} err="failed to get container status \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": rpc error: code = NotFound desc = could not find container \"e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b\": container with ID starting with e96972ea50c50dac70119f6d238eb3e46f85b185a762670d6abc31346665fd1b not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.180822 4860 scope.go:117] "RemoveContainer" containerID="916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181040 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9"} err="failed to get container status \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": rpc error: code = NotFound desc = could not find container \"916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9\": container with ID starting with 
916a80ed37203af190617067c4a882114c726c89e1eb2605782263f54f92a8c9 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181061 4860 scope.go:117] "RemoveContainer" containerID="0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181320 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767"} err="failed to get container status \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": rpc error: code = NotFound desc = could not find container \"0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767\": container with ID starting with 0ee8ba099024d4da516b4ed79ca412bded61c4155ceec6c31f7d4b780142f767 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181339 4860 scope.go:117] "RemoveContainer" containerID="ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181588 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca"} err="failed to get container status \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": rpc error: code = NotFound desc = could not find container \"ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca\": container with ID starting with ceeee5e226b68e5948f8acf620721de846bb1f8182831feb96e322143609b5ca not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181623 4860 scope.go:117] "RemoveContainer" containerID="530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181863 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2"} err="failed to get container status \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": rpc error: code = NotFound desc = could not find container \"530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2\": container with ID starting with 530bd604e48a251bbf382e1dbe4e47fb557c63e031413621acb0f214d0c7cfe2 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.181892 4860 scope.go:117] "RemoveContainer" containerID="c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.182214 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778"} err="failed to get container status \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": rpc error: code = NotFound desc = could not find container \"c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778\": container with ID starting with c29307debbfe35bdde6eed49e0457f98399fc4f7fa30a0bb305de6f01c504778 not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.182233 4860 scope.go:117] "RemoveContainer" containerID="daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.182432 4860 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e"} err="failed to get container status \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": rpc error: code = NotFound desc = could not find container \"daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e\": container with ID starting with daee978389d431154282dc5810fcabe31bf19bcf027eb39b091a32077dfa7e4e not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.182455 4860 scope.go:117] "RemoveContainer" containerID="85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.182640 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a"} err="failed to get container status \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": rpc error: code = NotFound desc = could not find container \"85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a\": container with ID starting with 85db8898a475307584edeed0693ab541b193ab0146e033148c9ef9c91f0d9a3a not found: ID does not exist" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.664754 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28d122d-d793-4f09-9f3d-00a5a5b93e6b" path="/var/lib/kubelet/pods/d28d122d-d793-4f09-9f3d-00a5a5b93e6b/volumes" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.977295 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"a491b6709b4009d0716c0952e49f773e8777cde20063d5be8d830e6ce89945e4"} Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.977597 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"a205408398cf5e87be3a046dafd678e0c5e7c2bb85d2efaecfcd1b48bec0aac1"} Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.977616 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"e361a4eaeee7e79a64cb188ccac0e19d1b1347c5528c627d2f48546636c81907"} Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.977628 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"62953ae2364b305c62a0385e36784f1386f3e4307eb67e6b27758223e5e4a442"} Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.979059 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b55cn_c3224b07-df3e-4f30-9d73-cf34290cfecb/kube-multus/0.log" Jan 23 08:24:17 crc kubenswrapper[4860]: I0123 08:24:17.979130 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b55cn" event={"ID":"c3224b07-df3e-4f30-9d73-cf34290cfecb","Type":"ContainerStarted","Data":"bb94290a847d702caf7f6678efd0a6aa9eeccbc43a766d24481511000a00a62f"} Jan 23 08:24:18 crc kubenswrapper[4860]: I0123 08:24:18.990246 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" 
event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"80d768bf6b81e1f9a445290b2eab906514a1c444586e1224ec217fce6a5bff8e"} Jan 23 08:24:18 crc kubenswrapper[4860]: I0123 08:24:18.990287 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"07c988cb434279bb46c3a779ab406687c2f6c89ce90546b656a28ba4d6b1c163"} Jan 23 08:24:22 crc kubenswrapper[4860]: I0123 08:24:22.012531 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"968c917ea3c094d6345ee150848c2f161cb18500664b6608ffb23033da85d641"} Jan 23 08:24:25 crc kubenswrapper[4860]: I0123 08:24:25.029743 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" event={"ID":"64bb7c59-ed66-44fe-a1ce-dd10f431fd64","Type":"ContainerStarted","Data":"e1f37de8c0ac531351093f97e8fea5edeb3b769f13dc0df64d196a5e64f8fcf5"} Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.034964 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.035044 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.035055 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.072289 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.076238 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" podStartSLOduration=10.076215293 podStartE2EDuration="10.076215293s" podCreationTimestamp="2026-01-23 08:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:24:26.071776766 +0000 UTC m=+572.699826961" watchObservedRunningTime="2026-01-23 08:24:26.076215293 +0000 UTC m=+572.704265468" Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.076858 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.775833 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:24:26 crc kubenswrapper[4860]: I0123 08:24:26.775899 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:24:46 crc kubenswrapper[4860]: I0123 08:24:46.794860 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbbk2" Jan 23 
08:24:56 crc kubenswrapper[4860]: I0123 08:24:56.775732 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:24:56 crc kubenswrapper[4860]: I0123 08:24:56.776323 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:25:26 crc kubenswrapper[4860]: I0123 08:25:26.775118 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:25:26 crc kubenswrapper[4860]: I0123 08:25:26.776099 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:25:26 crc kubenswrapper[4860]: I0123 08:25:26.776193 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:25:26 crc kubenswrapper[4860]: I0123 08:25:26.777435 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5f19eb6680810f114123e7b69b3ddca9ec3e33281e2c5954aa55c863d5a13f8"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:25:26 crc kubenswrapper[4860]: I0123 08:25:26.777576 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://a5f19eb6680810f114123e7b69b3ddca9ec3e33281e2c5954aa55c863d5a13f8" gracePeriod=600 Jan 23 08:25:27 crc kubenswrapper[4860]: I0123 08:25:27.002811 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="a5f19eb6680810f114123e7b69b3ddca9ec3e33281e2c5954aa55c863d5a13f8" exitCode=0 Jan 23 08:25:27 crc kubenswrapper[4860]: I0123 08:25:27.002886 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"a5f19eb6680810f114123e7b69b3ddca9ec3e33281e2c5954aa55c863d5a13f8"} Jan 23 08:25:27 crc kubenswrapper[4860]: I0123 08:25:27.003006 4860 scope.go:117] "RemoveContainer" containerID="789e53bf4116462d0d867afc4faead4f91efb1364fa83cabf4ea344608af1714" Jan 23 08:25:27 crc kubenswrapper[4860]: I0123 08:25:27.868970 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qww5h"] Jan 23 08:25:27 crc kubenswrapper[4860]: I0123 08:25:27.869466 4860 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qww5h" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="registry-server" containerID="cri-o://2ed4f5fccd8e0a467ece10ec8578b3d48a6f26f312265e9e99191c4933fc9057" gracePeriod=30 Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.009812 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"52a615d6aca47e73053df92f20f2d23afd3eb30795f9436f06952408ac8ca4f6"} Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.012290 4860 generic.go:334] "Generic (PLEG): container finished" podID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerID="2ed4f5fccd8e0a467ece10ec8578b3d48a6f26f312265e9e99191c4933fc9057" exitCode=0 Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.012359 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qww5h" event={"ID":"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f","Type":"ContainerDied","Data":"2ed4f5fccd8e0a467ece10ec8578b3d48a6f26f312265e9e99191c4933fc9057"} Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.183389 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.231052 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tvxs\" (UniqueName: \"kubernetes.io/projected/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-kube-api-access-4tvxs\") pod \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.231128 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-catalog-content\") pod \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.231269 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-utilities\") pod \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\" (UID: \"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f\") " Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.232525 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-utilities" (OuterVolumeSpecName: "utilities") pod "f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" (UID: "f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.239456 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-kube-api-access-4tvxs" (OuterVolumeSpecName: "kube-api-access-4tvxs") pod "f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" (UID: "f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f"). InnerVolumeSpecName "kube-api-access-4tvxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.255554 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" (UID: "f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.332377 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tvxs\" (UniqueName: \"kubernetes.io/projected/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-kube-api-access-4tvxs\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.332603 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:28 crc kubenswrapper[4860]: I0123 08:25:28.332663 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.020664 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qww5h" Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.024349 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qww5h" event={"ID":"f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f","Type":"ContainerDied","Data":"53fb266b9bbf35cc9dceb88c8477d070c7bc540719471fb24264a698912b5f8e"} Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.024416 4860 scope.go:117] "RemoveContainer" containerID="2ed4f5fccd8e0a467ece10ec8578b3d48a6f26f312265e9e99191c4933fc9057" Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.040642 4860 scope.go:117] "RemoveContainer" containerID="6aa55b4c49f6930f63971969119e6da2aefa9fdc9fee3bab17aa0145664f9a79" Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.049426 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qww5h"] Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.053113 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qww5h"] Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.060496 4860 scope.go:117] "RemoveContainer" containerID="c8c6b84e42977c45726144f6d6b9fd04704822ebccec28355e9555851e6b49e4" Jan 23 08:25:29 crc kubenswrapper[4860]: I0123 08:25:29.665612 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" path="/var/lib/kubelet/pods/f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f/volumes" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.837357 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l"] Jan 23 08:25:31 crc kubenswrapper[4860]: E0123 08:25:31.838758 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="extract-utilities" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.838842 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" 
containerName="extract-utilities" Jan 23 08:25:31 crc kubenswrapper[4860]: E0123 08:25:31.838920 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="extract-content" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.839013 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="extract-content" Jan 23 08:25:31 crc kubenswrapper[4860]: E0123 08:25:31.839109 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="registry-server" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.839181 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="registry-server" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.839362 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62e8ec2-3e8f-47b0-80ee-c1d3c3bab24f" containerName="registry-server" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.840258 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.842532 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l"] Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.845343 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.978590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.978653 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6gwj\" (UniqueName: \"kubernetes.io/projected/d7e81962-a190-4650-97f2-c0a40bebebe8-kube-api-access-p6gwj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:31 crc kubenswrapper[4860]: I0123 08:25:31.978684 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.080002 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" 
Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.080074 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6gwj\" (UniqueName: \"kubernetes.io/projected/d7e81962-a190-4650-97f2-c0a40bebebe8-kube-api-access-p6gwj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.080103 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.080583 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.080609 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.096964 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6gwj\" (UniqueName: \"kubernetes.io/projected/d7e81962-a190-4650-97f2-c0a40bebebe8-kube-api-access-p6gwj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.157875 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:32 crc kubenswrapper[4860]: I0123 08:25:32.376142 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l"] Jan 23 08:25:33 crc kubenswrapper[4860]: I0123 08:25:33.039267 4860 generic.go:334] "Generic (PLEG): container finished" podID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerID="d7df63c1b1f8702a3435a0899012e95ff788064d5caf61475698d3b5a242f318" exitCode=0 Jan 23 08:25:33 crc kubenswrapper[4860]: I0123 08:25:33.039325 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" event={"ID":"d7e81962-a190-4650-97f2-c0a40bebebe8","Type":"ContainerDied","Data":"d7df63c1b1f8702a3435a0899012e95ff788064d5caf61475698d3b5a242f318"} Jan 23 08:25:33 crc kubenswrapper[4860]: I0123 08:25:33.039360 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" event={"ID":"d7e81962-a190-4650-97f2-c0a40bebebe8","Type":"ContainerStarted","Data":"0f60713245c504c452a39458d38f460171e2139db440e48bc568bf6a06019dc0"} Jan 23 08:25:33 crc kubenswrapper[4860]: I0123 08:25:33.043005 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:25:35 crc kubenswrapper[4860]: I0123 08:25:35.050614 4860 generic.go:334] "Generic (PLEG): container finished" podID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerID="855f5e8c5e591f251222373ec5031362b0b9ca59e4756119d1f2af2f04cf1971" exitCode=0 Jan 23 08:25:35 crc kubenswrapper[4860]: I0123 08:25:35.050970 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" event={"ID":"d7e81962-a190-4650-97f2-c0a40bebebe8","Type":"ContainerDied","Data":"855f5e8c5e591f251222373ec5031362b0b9ca59e4756119d1f2af2f04cf1971"} Jan 23 08:25:36 crc kubenswrapper[4860]: I0123 08:25:36.062010 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" event={"ID":"d7e81962-a190-4650-97f2-c0a40bebebe8","Type":"ContainerStarted","Data":"074b052032e7e0963987e55c9ed74fde82945585f6dc59dc495857f0b1b1f3c0"} Jan 23 08:25:36 crc kubenswrapper[4860]: I0123 08:25:36.081112 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" podStartSLOduration=3.844206548 podStartE2EDuration="5.08109556s" podCreationTimestamp="2026-01-23 08:25:31 +0000 UTC" firstStartedPulling="2026-01-23 08:25:33.04272355 +0000 UTC m=+639.670773745" lastFinishedPulling="2026-01-23 08:25:34.279612572 +0000 UTC m=+640.907662757" observedRunningTime="2026-01-23 08:25:36.077850715 +0000 UTC m=+642.705900900" watchObservedRunningTime="2026-01-23 08:25:36.08109556 +0000 UTC m=+642.709145745" Jan 23 08:25:37 crc kubenswrapper[4860]: I0123 08:25:37.071269 4860 generic.go:334] "Generic (PLEG): container finished" podID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerID="074b052032e7e0963987e55c9ed74fde82945585f6dc59dc495857f0b1b1f3c0" exitCode=0 Jan 23 08:25:37 crc kubenswrapper[4860]: I0123 08:25:37.071367 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" event={"ID":"d7e81962-a190-4650-97f2-c0a40bebebe8","Type":"ContainerDied","Data":"074b052032e7e0963987e55c9ed74fde82945585f6dc59dc495857f0b1b1f3c0"} Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.331432 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.459537 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-util\") pod \"d7e81962-a190-4650-97f2-c0a40bebebe8\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.459614 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-bundle\") pod \"d7e81962-a190-4650-97f2-c0a40bebebe8\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.459765 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6gwj\" (UniqueName: \"kubernetes.io/projected/d7e81962-a190-4650-97f2-c0a40bebebe8-kube-api-access-p6gwj\") pod \"d7e81962-a190-4650-97f2-c0a40bebebe8\" (UID: \"d7e81962-a190-4650-97f2-c0a40bebebe8\") " Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.462192 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-bundle" (OuterVolumeSpecName: "bundle") pod "d7e81962-a190-4650-97f2-c0a40bebebe8" (UID: "d7e81962-a190-4650-97f2-c0a40bebebe8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.466056 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e81962-a190-4650-97f2-c0a40bebebe8-kube-api-access-p6gwj" (OuterVolumeSpecName: "kube-api-access-p6gwj") pod "d7e81962-a190-4650-97f2-c0a40bebebe8" (UID: "d7e81962-a190-4650-97f2-c0a40bebebe8"). InnerVolumeSpecName "kube-api-access-p6gwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.481554 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-util" (OuterVolumeSpecName: "util") pod "d7e81962-a190-4650-97f2-c0a40bebebe8" (UID: "d7e81962-a190-4650-97f2-c0a40bebebe8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.560937 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6gwj\" (UniqueName: \"kubernetes.io/projected/d7e81962-a190-4650-97f2-c0a40bebebe8-kube-api-access-p6gwj\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.560972 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.560981 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d7e81962-a190-4650-97f2-c0a40bebebe8-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.800689 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd"] Jan 23 08:25:38 crc kubenswrapper[4860]: E0123 08:25:38.801003 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="pull" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.801039 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="pull" Jan 23 08:25:38 crc kubenswrapper[4860]: E0123 08:25:38.801047 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="util" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.801055 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="util" Jan 23 08:25:38 crc kubenswrapper[4860]: E0123 08:25:38.801078 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="extract" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.801085 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="extract" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.801199 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e81962-a190-4650-97f2-c0a40bebebe8" containerName="extract" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.801883 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.812002 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd"] Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.964830 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mhx\" (UniqueName: \"kubernetes.io/projected/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-kube-api-access-c6mhx\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.964884 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:38 crc kubenswrapper[4860]: I0123 08:25:38.964943 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.066266 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.066512 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mhx\" (UniqueName: \"kubernetes.io/projected/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-kube-api-access-c6mhx\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.066575 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.066930 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.066991 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.082164 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" event={"ID":"d7e81962-a190-4650-97f2-c0a40bebebe8","Type":"ContainerDied","Data":"0f60713245c504c452a39458d38f460171e2139db440e48bc568bf6a06019dc0"} Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.082218 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f60713245c504c452a39458d38f460171e2139db440e48bc568bf6a06019dc0" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.082261 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.084077 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mhx\" (UniqueName: \"kubernetes.io/projected/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-kube-api-access-c6mhx\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.124403 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.514323 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd"] Jan 23 08:25:39 crc kubenswrapper[4860]: W0123 08:25:39.519285 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda45c01fc_b998_4a1a_b4ed_5b6aacf9845e.slice/crio-564a16793f5f69211406e686ed9666a31661fa47b5e4088b1d02730eefc74f23 WatchSource:0}: Error finding container 564a16793f5f69211406e686ed9666a31661fa47b5e4088b1d02730eefc74f23: Status 404 returned error can't find the container with id 564a16793f5f69211406e686ed9666a31661fa47b5e4088b1d02730eefc74f23 Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.811391 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c"] Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.812494 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.826201 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c"] Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.877076 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.877124 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.877206 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wvm\" (UniqueName: \"kubernetes.io/projected/ce3097f0-7b8c-44b3-b209-fe060e774f70-kube-api-access-k4wvm\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.977713 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wvm\" (UniqueName: \"kubernetes.io/projected/ce3097f0-7b8c-44b3-b209-fe060e774f70-kube-api-access-k4wvm\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.977787 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.977812 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.978252 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:39 crc kubenswrapper[4860]: I0123 08:25:39.978341 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:40 crc kubenswrapper[4860]: I0123 08:25:40.005089 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wvm\" (UniqueName: \"kubernetes.io/projected/ce3097f0-7b8c-44b3-b209-fe060e774f70-kube-api-access-k4wvm\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:40 crc kubenswrapper[4860]: I0123 08:25:40.087451 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" event={"ID":"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e","Type":"ContainerStarted","Data":"564a16793f5f69211406e686ed9666a31661fa47b5e4088b1d02730eefc74f23"} Jan 23 08:25:40 crc kubenswrapper[4860]: I0123 08:25:40.134077 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:40 crc kubenswrapper[4860]: I0123 08:25:40.297364 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c"] Jan 23 08:25:40 crc kubenswrapper[4860]: W0123 08:25:40.299059 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3097f0_7b8c_44b3_b209_fe060e774f70.slice/crio-02e1688109f56c55c8c60f8dc7ea0de685946b583964e6552e05fea3d643082a WatchSource:0}: Error finding container 02e1688109f56c55c8c60f8dc7ea0de685946b583964e6552e05fea3d643082a: Status 404 returned error can't find the container with id 02e1688109f56c55c8c60f8dc7ea0de685946b583964e6552e05fea3d643082a Jan 23 08:25:41 crc kubenswrapper[4860]: I0123 08:25:41.093695 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerID="1fe7ae538a72ba9c8d3ad3d49c237a8420009a33d0305ba706f8d25c255e1baf" exitCode=0 Jan 23 08:25:41 crc kubenswrapper[4860]: I0123 08:25:41.093769 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" event={"ID":"ce3097f0-7b8c-44b3-b209-fe060e774f70","Type":"ContainerDied","Data":"1fe7ae538a72ba9c8d3ad3d49c237a8420009a33d0305ba706f8d25c255e1baf"} Jan 23 08:25:41 crc kubenswrapper[4860]: I0123 08:25:41.093796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" event={"ID":"ce3097f0-7b8c-44b3-b209-fe060e774f70","Type":"ContainerStarted","Data":"02e1688109f56c55c8c60f8dc7ea0de685946b583964e6552e05fea3d643082a"} Jan 23 08:25:41 crc kubenswrapper[4860]: I0123 08:25:41.095292 4860 generic.go:334] "Generic (PLEG): container finished" podID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" 
containerID="ed3dae8d19fdf9a0daa2b66dc33e0f8717853cf649be66a8350b91b3f266ff63" exitCode=0 Jan 23 08:25:41 crc kubenswrapper[4860]: I0123 08:25:41.095329 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" event={"ID":"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e","Type":"ContainerDied","Data":"ed3dae8d19fdf9a0daa2b66dc33e0f8717853cf649be66a8350b91b3f266ff63"} Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.566912 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9srmg"] Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.568297 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.621111 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-utilities\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.621222 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-catalog-content\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.621245 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwl5\" (UniqueName: \"kubernetes.io/projected/c9b5317c-da20-4444-aa23-2a8453b48b2b-kube-api-access-4wwl5\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.640325 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9srmg"] Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.721907 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-catalog-content\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.721955 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwl5\" (UniqueName: \"kubernetes.io/projected/c9b5317c-da20-4444-aa23-2a8453b48b2b-kube-api-access-4wwl5\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.721991 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-utilities\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.722466 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-utilities\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.722545 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-catalog-content\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.746952 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwl5\" (UniqueName: \"kubernetes.io/projected/c9b5317c-da20-4444-aa23-2a8453b48b2b-kube-api-access-4wwl5\") pod \"certified-operators-9srmg\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.958410 4860 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:25:43 crc kubenswrapper[4860]: I0123 08:25:43.975279 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:44 crc kubenswrapper[4860]: I0123 08:25:44.111915 4860 generic.go:334] "Generic (PLEG): container finished" podID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerID="97bb350844d422da52dc952e5dc6f0c7f1808cb4c545e28cac4d67a7eae73125" exitCode=0 Jan 23 08:25:44 crc kubenswrapper[4860]: I0123 08:25:44.111983 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" event={"ID":"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e","Type":"ContainerDied","Data":"97bb350844d422da52dc952e5dc6f0c7f1808cb4c545e28cac4d67a7eae73125"} Jan 23 08:25:44 crc kubenswrapper[4860]: I0123 08:25:44.120611 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerID="4c077dc55cbc7460733d30e5e68e4a7b6defd6beba0bef42d925b99fd2e61e43" exitCode=0 Jan 23 08:25:44 crc kubenswrapper[4860]: I0123 08:25:44.120665 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" event={"ID":"ce3097f0-7b8c-44b3-b209-fe060e774f70","Type":"ContainerDied","Data":"4c077dc55cbc7460733d30e5e68e4a7b6defd6beba0bef42d925b99fd2e61e43"} Jan 23 08:25:44 crc kubenswrapper[4860]: I0123 08:25:44.546500 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9srmg"] Jan 23 08:25:44 crc kubenswrapper[4860]: W0123 08:25:44.560704 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9b5317c_da20_4444_aa23_2a8453b48b2b.slice/crio-793872e15fc7b8b02f2380f4b05453abf022c490b330835e8ea65cc8c2398a6c WatchSource:0}: Error finding container 793872e15fc7b8b02f2380f4b05453abf022c490b330835e8ea65cc8c2398a6c: Status 404 returned error can't find the container with id 793872e15fc7b8b02f2380f4b05453abf022c490b330835e8ea65cc8c2398a6c Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.128937 4860 generic.go:334] "Generic (PLEG): container finished" podID="ce3097f0-7b8c-44b3-b209-fe060e774f70" 
containerID="ae6ab049828ed6b8070f21a210524e9f7c2d7e96a53808eb0bb96056e3a90624" exitCode=0 Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.128983 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" event={"ID":"ce3097f0-7b8c-44b3-b209-fe060e774f70","Type":"ContainerDied","Data":"ae6ab049828ed6b8070f21a210524e9f7c2d7e96a53808eb0bb96056e3a90624"} Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.130905 4860 generic.go:334] "Generic (PLEG): container finished" podID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerID="4f974611b2a4d67ec3004352eeacbfeb35b203327dc27b327224565f06d3cbac" exitCode=0 Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.130998 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9srmg" event={"ID":"c9b5317c-da20-4444-aa23-2a8453b48b2b","Type":"ContainerDied","Data":"4f974611b2a4d67ec3004352eeacbfeb35b203327dc27b327224565f06d3cbac"} Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.131082 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9srmg" event={"ID":"c9b5317c-da20-4444-aa23-2a8453b48b2b","Type":"ContainerStarted","Data":"793872e15fc7b8b02f2380f4b05453abf022c490b330835e8ea65cc8c2398a6c"} Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.133293 4860 generic.go:334] "Generic (PLEG): container finished" podID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerID="58467b2dc54eed9995c15e4389a664ddb55e89436e094a0989260907c3532213" exitCode=0 Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.133353 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" event={"ID":"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e","Type":"ContainerDied","Data":"58467b2dc54eed9995c15e4389a664ddb55e89436e094a0989260907c3532213"} Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.227438 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pn9bb"] Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.228344 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.270836 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjq9\" (UniqueName: \"kubernetes.io/projected/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-kube-api-access-5tjq9\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.270931 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-catalog-content\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.270981 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-utilities\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.285748 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn9bb"] Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.372106 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-utilities\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.372181 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjq9\" (UniqueName: \"kubernetes.io/projected/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-kube-api-access-5tjq9\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.372221 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-catalog-content\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.372728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-catalog-content\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.372725 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-utilities\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.427088 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5tjq9\" (UniqueName: \"kubernetes.io/projected/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-kube-api-access-5tjq9\") pod \"redhat-operators-pn9bb\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:45 crc kubenswrapper[4860]: I0123 08:25:45.541501 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.073002 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pn9bb"] Jan 23 08:25:46 crc kubenswrapper[4860]: W0123 08:25:46.080474 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca80b7c_3bf5_40a4_bad1_1c1595dc05c9.slice/crio-161af8649b1338aca39cfb52aed13c4d13590ef9cca50f4f7602db5f345e506f WatchSource:0}: Error finding container 161af8649b1338aca39cfb52aed13c4d13590ef9cca50f4f7602db5f345e506f: Status 404 returned error can't find the container with id 161af8649b1338aca39cfb52aed13c4d13590ef9cca50f4f7602db5f345e506f Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.158121 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerStarted","Data":"161af8649b1338aca39cfb52aed13c4d13590ef9cca50f4f7602db5f345e506f"} Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.459625 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.489308 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-util\") pod \"ce3097f0-7b8c-44b3-b209-fe060e774f70\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.489443 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wvm\" (UniqueName: \"kubernetes.io/projected/ce3097f0-7b8c-44b3-b209-fe060e774f70-kube-api-access-k4wvm\") pod \"ce3097f0-7b8c-44b3-b209-fe060e774f70\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.489498 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-bundle\") pod \"ce3097f0-7b8c-44b3-b209-fe060e774f70\" (UID: \"ce3097f0-7b8c-44b3-b209-fe060e774f70\") " Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.491527 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-bundle" (OuterVolumeSpecName: "bundle") pod "ce3097f0-7b8c-44b3-b209-fe060e774f70" (UID: "ce3097f0-7b8c-44b3-b209-fe060e774f70"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.496754 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3097f0-7b8c-44b3-b209-fe060e774f70-kube-api-access-k4wvm" (OuterVolumeSpecName: "kube-api-access-k4wvm") pod "ce3097f0-7b8c-44b3-b209-fe060e774f70" (UID: "ce3097f0-7b8c-44b3-b209-fe060e774f70"). 
InnerVolumeSpecName "kube-api-access-k4wvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.549208 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.590438 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-bundle\") pod \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.590519 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-util\") pod \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.590598 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6mhx\" (UniqueName: \"kubernetes.io/projected/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-kube-api-access-c6mhx\") pod \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\" (UID: \"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e\") " Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.590913 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4wvm\" (UniqueName: \"kubernetes.io/projected/ce3097f0-7b8c-44b3-b209-fe060e774f70-kube-api-access-k4wvm\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.590930 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.591368 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-bundle" (OuterVolumeSpecName: "bundle") pod "a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" (UID: "a45c01fc-b998-4a1a-b4ed-5b6aacf9845e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.595252 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-kube-api-access-c6mhx" (OuterVolumeSpecName: "kube-api-access-c6mhx") pod "a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" (UID: "a45c01fc-b998-4a1a-b4ed-5b6aacf9845e"). InnerVolumeSpecName "kube-api-access-c6mhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.691623 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6mhx\" (UniqueName: \"kubernetes.io/projected/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-kube-api-access-c6mhx\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:46 crc kubenswrapper[4860]: I0123 08:25:46.691675 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.163701 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" event={"ID":"ce3097f0-7b8c-44b3-b209-fe060e774f70","Type":"ContainerDied","Data":"02e1688109f56c55c8c60f8dc7ea0de685946b583964e6552e05fea3d643082a"} Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.163737 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e1688109f56c55c8c60f8dc7ea0de685946b583964e6552e05fea3d643082a" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.163797 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.165511 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" event={"ID":"a45c01fc-b998-4a1a-b4ed-5b6aacf9845e","Type":"ContainerDied","Data":"564a16793f5f69211406e686ed9666a31661fa47b5e4088b1d02730eefc74f23"} Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.165534 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564a16793f5f69211406e686ed9666a31661fa47b5e4088b1d02730eefc74f23" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.165578 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.284064 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-util" (OuterVolumeSpecName: "util") pod "a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" (UID: "a45c01fc-b998-4a1a-b4ed-5b6aacf9845e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.296169 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-util" (OuterVolumeSpecName: "util") pod "ce3097f0-7b8c-44b3-b209-fe060e774f70" (UID: "ce3097f0-7b8c-44b3-b209-fe060e774f70"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.299524 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce3097f0-7b8c-44b3-b209-fe060e774f70-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:47 crc kubenswrapper[4860]: I0123 08:25:47.299554 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a45c01fc-b998-4a1a-b4ed-5b6aacf9845e-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036529 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49"] Jan 23 08:25:48 crc kubenswrapper[4860]: E0123 08:25:48.036707 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="util" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036717 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="util" Jan 23 08:25:48 crc kubenswrapper[4860]: E0123 08:25:48.036727 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="extract" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036732 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="extract" Jan 23 08:25:48 crc kubenswrapper[4860]: E0123 08:25:48.036740 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="pull" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036746 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="pull" Jan 23 08:25:48 crc kubenswrapper[4860]: E0123 08:25:48.036756 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="extract" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036762 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="extract" Jan 23 08:25:48 crc kubenswrapper[4860]: E0123 08:25:48.036773 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="util" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036779 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="util" Jan 23 08:25:48 crc kubenswrapper[4860]: E0123 08:25:48.036793 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="pull" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036799 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="pull" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036887 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45c01fc-b998-4a1a-b4ed-5b6aacf9845e" containerName="extract" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.036899 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3097f0-7b8c-44b3-b209-fe060e774f70" containerName="extract" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.037603 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.041461 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.050332 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49"] Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.108889 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.108931 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw58d\" (UniqueName: \"kubernetes.io/projected/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-kube-api-access-nw58d\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.108986 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.171567 4860 generic.go:334] "Generic (PLEG): container finished" podID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerID="7be75848e9019582280fb445f7fe54ffbff59b2b7e480dbbcaefd9bc0ca65900" exitCode=0 Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.171777 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerDied","Data":"7be75848e9019582280fb445f7fe54ffbff59b2b7e480dbbcaefd9bc0ca65900"} Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.209747 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.210070 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw58d\" (UniqueName: \"kubernetes.io/projected/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-kube-api-access-nw58d\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.210191 4860 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.210364 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.210644 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.263624 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw58d\" (UniqueName: \"kubernetes.io/projected/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-kube-api-access-nw58d\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.354962 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:25:48 crc kubenswrapper[4860]: I0123 08:25:48.636045 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.177708 4860 generic.go:334] "Generic (PLEG): container finished" podID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerID="96efc11ef7b17028ac8c7ea942d5d3009ad597306c271ddce79ad163342c6958" exitCode=0 Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.177768 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9srmg" event={"ID":"c9b5317c-da20-4444-aa23-2a8453b48b2b","Type":"ContainerDied","Data":"96efc11ef7b17028ac8c7ea942d5d3009ad597306c271ddce79ad163342c6958"} Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.179957 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" event={"ID":"53f57f2b-fcc2-4afb-997e-97a4c3c3e184","Type":"ContainerStarted","Data":"708df532d16b35cf426da50306c9582201a9b6c18eebe6150db09fd01b48df89"} Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.234329 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.234972 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.240970 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-chqvm" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.240986 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.240976 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.287784 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.324854 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz57f\" (UniqueName: \"kubernetes.io/projected/5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd-kube-api-access-jz57f\") pod \"obo-prometheus-operator-68bc856cb9-68wdp\" (UID: \"5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.375303 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.375912 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.380674 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.381276 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-lmxcm" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.386801 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.388304 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.401797 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.406165 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.427401 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-9jckz\" (UID: \"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.427508 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1305c249-d5a2-488b-9c2c-2f528c3a0f49-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-qzmzn\" (UID: \"1305c249-d5a2-488b-9c2c-2f528c3a0f49\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.427550 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz57f\" (UniqueName: \"kubernetes.io/projected/5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd-kube-api-access-jz57f\") pod \"obo-prometheus-operator-68bc856cb9-68wdp\" (UID: \"5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.427593 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1305c249-d5a2-488b-9c2c-2f528c3a0f49-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-qzmzn\" (UID: \"1305c249-d5a2-488b-9c2c-2f528c3a0f49\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.427634 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-9jckz\" (UID: \"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.459632 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz57f\" (UniqueName: \"kubernetes.io/projected/5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd-kube-api-access-jz57f\") pod \"obo-prometheus-operator-68bc856cb9-68wdp\" (UID: \"5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.529256 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1305c249-d5a2-488b-9c2c-2f528c3a0f49-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-54d658468c-qzmzn\" (UID: \"1305c249-d5a2-488b-9c2c-2f528c3a0f49\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.529564 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-9jckz\" (UID: \"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.529709 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-9jckz\" (UID: \"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.529852 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1305c249-d5a2-488b-9c2c-2f528c3a0f49-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-qzmzn\" (UID: \"1305c249-d5a2-488b-9c2c-2f528c3a0f49\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.532787 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1305c249-d5a2-488b-9c2c-2f528c3a0f49-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-qzmzn\" (UID: \"1305c249-d5a2-488b-9c2c-2f528c3a0f49\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.533081 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-9jckz\" (UID: \"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.538435 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-9jckz\" (UID: \"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.538831 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1305c249-d5a2-488b-9c2c-2f528c3a0f49-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54d658468c-qzmzn\" (UID: \"1305c249-d5a2-488b-9c2c-2f528c3a0f49\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.548562 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.568136 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zkz7p"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.568928 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.579194 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-8k4xl" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.583597 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.595705 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zkz7p"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.631822 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea9793c3-f700-496f-b5b3-330037b6323e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zkz7p\" (UID: \"ea9793c3-f700-496f-b5b3-330037b6323e\") " pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.631883 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crf2p\" (UniqueName: \"kubernetes.io/projected/ea9793c3-f700-496f-b5b3-330037b6323e-kube-api-access-crf2p\") pod \"observability-operator-59bdc8b94-zkz7p\" (UID: \"ea9793c3-f700-496f-b5b3-330037b6323e\") " pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.700602 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.713686 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.732778 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea9793c3-f700-496f-b5b3-330037b6323e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zkz7p\" (UID: \"ea9793c3-f700-496f-b5b3-330037b6323e\") " pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.732818 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crf2p\" (UniqueName: \"kubernetes.io/projected/ea9793c3-f700-496f-b5b3-330037b6323e-kube-api-access-crf2p\") pod \"observability-operator-59bdc8b94-zkz7p\" (UID: \"ea9793c3-f700-496f-b5b3-330037b6323e\") " pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.746495 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea9793c3-f700-496f-b5b3-330037b6323e-observability-operator-tls\") pod \"observability-operator-59bdc8b94-zkz7p\" (UID: \"ea9793c3-f700-496f-b5b3-330037b6323e\") " pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.765944 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crf2p\" (UniqueName: \"kubernetes.io/projected/ea9793c3-f700-496f-b5b3-330037b6323e-kube-api-access-crf2p\") pod \"observability-operator-59bdc8b94-zkz7p\" (UID: \"ea9793c3-f700-496f-b5b3-330037b6323e\") " pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.766937 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-d9lkg"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.767932 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.773571 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-d4hkd" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.794969 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-d9lkg"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.842429 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp"] Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.843052 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/265698c3-44cd-419b-a5db-f35ec2ef8514-openshift-service-ca\") pod \"perses-operator-5bf474d74f-d9lkg\" (UID: \"265698c3-44cd-419b-a5db-f35ec2ef8514\") " pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.843104 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tscks\" (UniqueName: \"kubernetes.io/projected/265698c3-44cd-419b-a5db-f35ec2ef8514-kube-api-access-tscks\") pod \"perses-operator-5bf474d74f-d9lkg\" (UID: \"265698c3-44cd-419b-a5db-f35ec2ef8514\") " pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:49 crc kubenswrapper[4860]: W0123 08:25:49.872399 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd5ffe6_350f_4c69_bafa_f203a0c1e7bd.slice/crio-c62e693a087d91ad1261755875d48c3836d81d0a2b2c98d1d90aacfb090f01bd WatchSource:0}: Error finding container c62e693a087d91ad1261755875d48c3836d81d0a2b2c98d1d90aacfb090f01bd: Status 404 returned error can't find the container with id c62e693a087d91ad1261755875d48c3836d81d0a2b2c98d1d90aacfb090f01bd Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.919039 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.944631 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/265698c3-44cd-419b-a5db-f35ec2ef8514-openshift-service-ca\") pod \"perses-operator-5bf474d74f-d9lkg\" (UID: \"265698c3-44cd-419b-a5db-f35ec2ef8514\") " pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.944696 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tscks\" (UniqueName: \"kubernetes.io/projected/265698c3-44cd-419b-a5db-f35ec2ef8514-kube-api-access-tscks\") pod \"perses-operator-5bf474d74f-d9lkg\" (UID: \"265698c3-44cd-419b-a5db-f35ec2ef8514\") " pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.945757 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/265698c3-44cd-419b-a5db-f35ec2ef8514-openshift-service-ca\") pod \"perses-operator-5bf474d74f-d9lkg\" (UID: \"265698c3-44cd-419b-a5db-f35ec2ef8514\") " pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:49 crc kubenswrapper[4860]: I0123 08:25:49.965282 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tscks\" (UniqueName: \"kubernetes.io/projected/265698c3-44cd-419b-a5db-f35ec2ef8514-kube-api-access-tscks\") pod \"perses-operator-5bf474d74f-d9lkg\" (UID: \"265698c3-44cd-419b-a5db-f35ec2ef8514\") " pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.104246 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.114647 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn"] Jan 23 08:25:50 crc kubenswrapper[4860]: W0123 08:25:50.136729 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1305c249_d5a2_488b_9c2c_2f528c3a0f49.slice/crio-1589ca8ad25509f75eea3621d96793d44f7a0e4ebe36c049fa095bd130f7ccfc WatchSource:0}: Error finding container 1589ca8ad25509f75eea3621d96793d44f7a0e4ebe36c049fa095bd130f7ccfc: Status 404 returned error can't find the container with id 1589ca8ad25509f75eea3621d96793d44f7a0e4ebe36c049fa095bd130f7ccfc Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.192283 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerStarted","Data":"3f10710165b154bb6dbab13ad3834d49690bb494a91f33751d34885f968cf98e"} Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.208453 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" event={"ID":"1305c249-d5a2-488b-9c2c-2f528c3a0f49","Type":"ContainerStarted","Data":"1589ca8ad25509f75eea3621d96793d44f7a0e4ebe36c049fa095bd130f7ccfc"} Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.217535 4860 generic.go:334] "Generic (PLEG): container finished" podID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerID="e763cbdc6cbc101273b90a7857c9e6bf5142d56693181ed996811257171ec904" exitCode=0 Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.217620 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" event={"ID":"53f57f2b-fcc2-4afb-997e-97a4c3c3e184","Type":"ContainerDied","Data":"e763cbdc6cbc101273b90a7857c9e6bf5142d56693181ed996811257171ec904"} Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.219000 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" event={"ID":"5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd","Type":"ContainerStarted","Data":"c62e693a087d91ad1261755875d48c3836d81d0a2b2c98d1d90aacfb090f01bd"} Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.223773 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz"] Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.367830 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-d9lkg"] Jan 23 08:25:50 crc kubenswrapper[4860]: I0123 08:25:50.413315 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-zkz7p"] Jan 23 08:25:50 crc kubenswrapper[4860]: W0123 08:25:50.418361 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9793c3_f700_496f_b5b3_330037b6323e.slice/crio-5666061f46803f72ae9c27d3d43b53be0b31116d67aac955ebfa7f943d9afb4c WatchSource:0}: Error finding container 5666061f46803f72ae9c27d3d43b53be0b31116d67aac955ebfa7f943d9afb4c: Status 404 returned error can't find the container with id 
5666061f46803f72ae9c27d3d43b53be0b31116d67aac955ebfa7f943d9afb4c Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.234074 4860 generic.go:334] "Generic (PLEG): container finished" podID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerID="3f10710165b154bb6dbab13ad3834d49690bb494a91f33751d34885f968cf98e" exitCode=0 Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.234132 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerDied","Data":"3f10710165b154bb6dbab13ad3834d49690bb494a91f33751d34885f968cf98e"} Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.237213 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" event={"ID":"265698c3-44cd-419b-a5db-f35ec2ef8514","Type":"ContainerStarted","Data":"3afdc761384d07b7e27bfe1649589635a49b80c3b3064c688e77439a55ecee19"} Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.245257 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" event={"ID":"ea9793c3-f700-496f-b5b3-330037b6323e","Type":"ContainerStarted","Data":"5666061f46803f72ae9c27d3d43b53be0b31116d67aac955ebfa7f943d9afb4c"} Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.247104 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" event={"ID":"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b","Type":"ContainerStarted","Data":"660de892ec07ad0a84e4687be56cc6070561a328dee747a5b9f3700fd2c43786"} Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.266820 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9srmg" event={"ID":"c9b5317c-da20-4444-aa23-2a8453b48b2b","Type":"ContainerStarted","Data":"c44188196064ef04c4c4f530a73006cbf5e6b5241c0b7c532cb6025bc917eb64"} Jan 23 08:25:51 crc kubenswrapper[4860]: I0123 08:25:51.288129 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9srmg" podStartSLOduration=3.498316741 podStartE2EDuration="8.288105874s" podCreationTimestamp="2026-01-23 08:25:43 +0000 UTC" firstStartedPulling="2026-01-23 08:25:45.133417772 +0000 UTC m=+651.761467957" lastFinishedPulling="2026-01-23 08:25:49.923206905 +0000 UTC m=+656.551257090" observedRunningTime="2026-01-23 08:25:51.283576254 +0000 UTC m=+657.911626439" watchObservedRunningTime="2026-01-23 08:25:51.288105874 +0000 UTC m=+657.916156059" Jan 23 08:25:53 crc kubenswrapper[4860]: I0123 08:25:53.282380 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerStarted","Data":"4b701094d12db11f395a4fd4102d981b17900a2c9f1abb6258f073e515659b57"} Jan 23 08:25:53 crc kubenswrapper[4860]: I0123 08:25:53.307085 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pn9bb" podStartSLOduration=3.622834383 podStartE2EDuration="8.307068587s" podCreationTimestamp="2026-01-23 08:25:45 +0000 UTC" firstStartedPulling="2026-01-23 08:25:48.172837457 +0000 UTC m=+654.800887642" lastFinishedPulling="2026-01-23 08:25:52.857071661 +0000 UTC m=+659.485121846" observedRunningTime="2026-01-23 08:25:53.302350203 +0000 UTC m=+659.930400378" watchObservedRunningTime="2026-01-23 08:25:53.307068587 +0000 UTC 
m=+659.935118772" Jan 23 08:25:53 crc kubenswrapper[4860]: I0123 08:25:53.975650 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:53 crc kubenswrapper[4860]: I0123 08:25:53.975724 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:54 crc kubenswrapper[4860]: I0123 08:25:54.052381 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:25:55 crc kubenswrapper[4860]: I0123 08:25:55.542330 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:55 crc kubenswrapper[4860]: I0123 08:25:55.542410 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.183010 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-79464f474f-dfj8n"] Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.184638 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.187282 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.187495 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-9966n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.187672 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.187814 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.212125 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-79464f474f-dfj8n"] Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.346960 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ce5d460-3dfd-47da-9c55-2257049d44fc-webhook-cert\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.347082 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ce5d460-3dfd-47da-9c55-2257049d44fc-apiservice-cert\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.347109 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qcl\" (UniqueName: \"kubernetes.io/projected/2ce5d460-3dfd-47da-9c55-2257049d44fc-kube-api-access-w5qcl\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: 
I0123 08:25:56.449600 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ce5d460-3dfd-47da-9c55-2257049d44fc-webhook-cert\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.449695 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ce5d460-3dfd-47da-9c55-2257049d44fc-apiservice-cert\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.449719 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qcl\" (UniqueName: \"kubernetes.io/projected/2ce5d460-3dfd-47da-9c55-2257049d44fc-kube-api-access-w5qcl\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.456906 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ce5d460-3dfd-47da-9c55-2257049d44fc-webhook-cert\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.474450 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ce5d460-3dfd-47da-9c55-2257049d44fc-apiservice-cert\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.478286 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qcl\" (UniqueName: \"kubernetes.io/projected/2ce5d460-3dfd-47da-9c55-2257049d44fc-kube-api-access-w5qcl\") pod \"elastic-operator-79464f474f-dfj8n\" (UID: \"2ce5d460-3dfd-47da-9c55-2257049d44fc\") " pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.503403 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-79464f474f-dfj8n" Jan 23 08:25:56 crc kubenswrapper[4860]: I0123 08:25:56.594706 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pn9bb" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="registry-server" probeResult="failure" output=< Jan 23 08:25:56 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 23 08:25:56 crc kubenswrapper[4860]: > Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.296369 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-8ch5j"] Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.297289 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.299313 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-n52zl" Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.308882 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-8ch5j"] Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.494707 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7n7\" (UniqueName: \"kubernetes.io/projected/aed90fa7-5ace-46a6-b8a1-8de00e70331e-kube-api-access-5b7n7\") pod \"interconnect-operator-5bb49f789d-8ch5j\" (UID: \"aed90fa7-5ace-46a6-b8a1-8de00e70331e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.596142 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7n7\" (UniqueName: \"kubernetes.io/projected/aed90fa7-5ace-46a6-b8a1-8de00e70331e-kube-api-access-5b7n7\") pod \"interconnect-operator-5bb49f789d-8ch5j\" (UID: \"aed90fa7-5ace-46a6-b8a1-8de00e70331e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.621553 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7n7\" (UniqueName: \"kubernetes.io/projected/aed90fa7-5ace-46a6-b8a1-8de00e70331e-kube-api-access-5b7n7\") pod \"interconnect-operator-5bb49f789d-8ch5j\" (UID: \"aed90fa7-5ace-46a6-b8a1-8de00e70331e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" Jan 23 08:26:00 crc kubenswrapper[4860]: I0123 08:26:00.916610 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" Jan 23 08:26:04 crc kubenswrapper[4860]: I0123 08:26:04.018285 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:26:05 crc kubenswrapper[4860]: I0123 08:26:05.584855 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:26:05 crc kubenswrapper[4860]: I0123 08:26:05.629837 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:26:06 crc kubenswrapper[4860]: E0123 08:26:06.212553 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 23 08:26:06 crc kubenswrapper[4860]: E0123 08:26:06.212776 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jz57f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-68wdp_openshift-operators(5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:26:06 crc kubenswrapper[4860]: E0123 08:26:06.214033 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" podUID="5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd" Jan 23 08:26:06 crc kubenswrapper[4860]: E0123 08:26:06.378753 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" podUID="5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd" Jan 23 08:26:07 crc kubenswrapper[4860]: I0123 08:26:07.155796 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9srmg"] Jan 23 08:26:07 crc kubenswrapper[4860]: I0123 08:26:07.156711 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9srmg" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="registry-server" containerID="cri-o://c44188196064ef04c4c4f530a73006cbf5e6b5241c0b7c532cb6025bc917eb64" gracePeriod=2 Jan 23 08:26:07 crc kubenswrapper[4860]: I0123 08:26:07.383522 4860 generic.go:334] "Generic (PLEG): container finished" podID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerID="c44188196064ef04c4c4f530a73006cbf5e6b5241c0b7c532cb6025bc917eb64" exitCode=0 Jan 23 08:26:07 crc kubenswrapper[4860]: I0123 08:26:07.383608 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9srmg" event={"ID":"c9b5317c-da20-4444-aa23-2a8453b48b2b","Type":"ContainerDied","Data":"c44188196064ef04c4c4f530a73006cbf5e6b5241c0b7c532cb6025bc917eb64"} Jan 23 08:26:09 crc kubenswrapper[4860]: I0123 08:26:09.956029 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn9bb"] Jan 23 08:26:09 crc kubenswrapper[4860]: I0123 08:26:09.956270 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pn9bb" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="registry-server" containerID="cri-o://4b701094d12db11f395a4fd4102d981b17900a2c9f1abb6258f073e515659b57" gracePeriod=2 Jan 23 08:26:10 crc kubenswrapper[4860]: I0123 08:26:10.406602 4860 generic.go:334] "Generic (PLEG): container finished" podID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerID="4b701094d12db11f395a4fd4102d981b17900a2c9f1abb6258f073e515659b57" exitCode=0 Jan 23 08:26:10 crc kubenswrapper[4860]: I0123 08:26:10.406643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerDied","Data":"4b701094d12db11f395a4fd4102d981b17900a2c9f1abb6258f073e515659b57"} Jan 23 08:26:11 crc kubenswrapper[4860]: E0123 08:26:11.680788 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 23 08:26:11 crc kubenswrapper[4860]: E0123 08:26:11.681300 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tscks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-d9lkg_openshift-operators(265698c3-44cd-419b-a5db-f35ec2ef8514): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:26:11 crc kubenswrapper[4860]: E0123 08:26:11.682467 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" podUID="265698c3-44cd-419b-a5db-f35ec2ef8514" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.418456 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" 
podUID="265698c3-44cd-419b-a5db-f35ec2ef8514" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.441705 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.441904 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-54d658468c-qzmzn_openshift-operators(1305c249-d5a2-488b-9c2c-2f528c3a0f49): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.443096 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" podUID="1305c249-d5a2-488b-9c2c-2f528c3a0f49" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.521566 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.521758 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-54d658468c-9jckz_openshift-operators(d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.523766 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" podUID="d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.579304 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.579819 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407,Command:[/util/cpb 
/bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nw58d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_openshift-marketplace(53f57f2b-fcc2-4afb-997e-97a4c3c3e184): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:26:12 crc kubenswrapper[4860]: E0123 08:26:12.581420 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.716713 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.885260 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wwl5\" (UniqueName: \"kubernetes.io/projected/c9b5317c-da20-4444-aa23-2a8453b48b2b-kube-api-access-4wwl5\") pod \"c9b5317c-da20-4444-aa23-2a8453b48b2b\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.885586 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-utilities\") pod \"c9b5317c-da20-4444-aa23-2a8453b48b2b\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.885708 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-catalog-content\") pod \"c9b5317c-da20-4444-aa23-2a8453b48b2b\" (UID: \"c9b5317c-da20-4444-aa23-2a8453b48b2b\") " Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.886409 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-utilities" (OuterVolumeSpecName: "utilities") pod "c9b5317c-da20-4444-aa23-2a8453b48b2b" (UID: "c9b5317c-da20-4444-aa23-2a8453b48b2b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.903181 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b5317c-da20-4444-aa23-2a8453b48b2b-kube-api-access-4wwl5" (OuterVolumeSpecName: "kube-api-access-4wwl5") pod "c9b5317c-da20-4444-aa23-2a8453b48b2b" (UID: "c9b5317c-da20-4444-aa23-2a8453b48b2b"). InnerVolumeSpecName "kube-api-access-4wwl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.916185 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.972540 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9b5317c-da20-4444-aa23-2a8453b48b2b" (UID: "c9b5317c-da20-4444-aa23-2a8453b48b2b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.986698 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wwl5\" (UniqueName: \"kubernetes.io/projected/c9b5317c-da20-4444-aa23-2a8453b48b2b-kube-api-access-4wwl5\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.986755 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:12 crc kubenswrapper[4860]: I0123 08:26:12.986770 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9b5317c-da20-4444-aa23-2a8453b48b2b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.087615 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tjq9\" (UniqueName: \"kubernetes.io/projected/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-kube-api-access-5tjq9\") pod \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.087757 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-catalog-content\") pod \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.087778 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-utilities\") pod \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\" (UID: \"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9\") " Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.089037 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-utilities" (OuterVolumeSpecName: "utilities") pod "cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" (UID: "cca80b7c-3bf5-40a4-bad1-1c1595dc05c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.091831 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-kube-api-access-5tjq9" (OuterVolumeSpecName: "kube-api-access-5tjq9") pod "cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" (UID: "cca80b7c-3bf5-40a4-bad1-1c1595dc05c9"). InnerVolumeSpecName "kube-api-access-5tjq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.153567 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-79464f474f-dfj8n"] Jan 23 08:26:13 crc kubenswrapper[4860]: W0123 08:26:13.168191 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce5d460_3dfd_47da_9c55_2257049d44fc.slice/crio-2b33ea8bf1fd4345e9c14737e89c58b1b0f12593da39c78a444fff7b660af0bc WatchSource:0}: Error finding container 2b33ea8bf1fd4345e9c14737e89c58b1b0f12593da39c78a444fff7b660af0bc: Status 404 returned error can't find the container with id 2b33ea8bf1fd4345e9c14737e89c58b1b0f12593da39c78a444fff7b660af0bc Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.188708 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tjq9\" (UniqueName: \"kubernetes.io/projected/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-kube-api-access-5tjq9\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.188980 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.202878 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" (UID: "cca80b7c-3bf5-40a4-bad1-1c1595dc05c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.243666 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-8ch5j"] Jan 23 08:26:13 crc kubenswrapper[4860]: W0123 08:26:13.249373 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaed90fa7_5ace_46a6_b8a1_8de00e70331e.slice/crio-2d3c83b6cfe39575fa71c2f6835f5994de2b8f44d95ceb48a14ee20ce37bba24 WatchSource:0}: Error finding container 2d3c83b6cfe39575fa71c2f6835f5994de2b8f44d95ceb48a14ee20ce37bba24: Status 404 returned error can't find the container with id 2d3c83b6cfe39575fa71c2f6835f5994de2b8f44d95ceb48a14ee20ce37bba24 Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.290565 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.424683 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pn9bb" event={"ID":"cca80b7c-3bf5-40a4-bad1-1c1595dc05c9","Type":"ContainerDied","Data":"161af8649b1338aca39cfb52aed13c4d13590ef9cca50f4f7602db5f345e506f"} Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.424714 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pn9bb" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.424745 4860 scope.go:117] "RemoveContainer" containerID="4b701094d12db11f395a4fd4102d981b17900a2c9f1abb6258f073e515659b57" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.425779 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" event={"ID":"aed90fa7-5ace-46a6-b8a1-8de00e70331e","Type":"ContainerStarted","Data":"2d3c83b6cfe39575fa71c2f6835f5994de2b8f44d95ceb48a14ee20ce37bba24"} Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.430859 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9srmg" event={"ID":"c9b5317c-da20-4444-aa23-2a8453b48b2b","Type":"ContainerDied","Data":"793872e15fc7b8b02f2380f4b05453abf022c490b330835e8ea65cc8c2398a6c"} Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.430967 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9srmg" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.436780 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" event={"ID":"ea9793c3-f700-496f-b5b3-330037b6323e","Type":"ContainerStarted","Data":"57f3e3f8296a62820f029fda9af3fa37edfbd082da1321b371bdce7bb78ce729"} Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.438736 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.440699 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.442658 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-79464f474f-dfj8n" event={"ID":"2ce5d460-3dfd-47da-9c55-2257049d44fc","Type":"ContainerStarted","Data":"2b33ea8bf1fd4345e9c14737e89c58b1b0f12593da39c78a444fff7b660af0bc"} Jan 23 08:26:13 crc kubenswrapper[4860]: E0123 08:26:13.443441 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407\\\"\"" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" Jan 23 08:26:13 crc kubenswrapper[4860]: E0123 08:26:13.443787 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" podUID="1305c249-d5a2-488b-9c2c-2f528c3a0f49" Jan 23 08:26:13 crc kubenswrapper[4860]: E0123 08:26:13.445794 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" podUID="d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.454549 4860 scope.go:117] "RemoveContainer" containerID="3f10710165b154bb6dbab13ad3834d49690bb494a91f33751d34885f968cf98e" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.473740 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-zkz7p" podStartSLOduration=2.284739514 podStartE2EDuration="24.473706305s" podCreationTimestamp="2026-01-23 08:25:49 +0000 UTC" firstStartedPulling="2026-01-23 08:25:50.420650227 +0000 UTC m=+657.048700422" lastFinishedPulling="2026-01-23 08:26:12.609617028 +0000 UTC m=+679.237667213" observedRunningTime="2026-01-23 08:26:13.467586287 +0000 UTC m=+680.095636482" watchObservedRunningTime="2026-01-23 08:26:13.473706305 +0000 UTC m=+680.101756490" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.495333 4860 scope.go:117] "RemoveContainer" containerID="7be75848e9019582280fb445f7fe54ffbff59b2b7e480dbbcaefd9bc0ca65900" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.517262 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9srmg"] Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.521348 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9srmg"] Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.523449 4860 scope.go:117] "RemoveContainer" containerID="c44188196064ef04c4c4f530a73006cbf5e6b5241c0b7c532cb6025bc917eb64" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.541324 4860 scope.go:117] "RemoveContainer" containerID="96efc11ef7b17028ac8c7ea942d5d3009ad597306c271ddce79ad163342c6958" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.562849 4860 scope.go:117] "RemoveContainer" containerID="4f974611b2a4d67ec3004352eeacbfeb35b203327dc27b327224565f06d3cbac" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.581363 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pn9bb"] Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.584168 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pn9bb"] Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.664160 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" path="/var/lib/kubelet/pods/c9b5317c-da20-4444-aa23-2a8453b48b2b/volumes" Jan 23 08:26:13 crc kubenswrapper[4860]: I0123 08:26:13.664747 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" path="/var/lib/kubelet/pods/cca80b7c-3bf5-40a4-bad1-1c1595dc05c9/volumes" Jan 23 08:26:17 crc kubenswrapper[4860]: I0123 08:26:17.534139 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-79464f474f-dfj8n" event={"ID":"2ce5d460-3dfd-47da-9c55-2257049d44fc","Type":"ContainerStarted","Data":"45247e4634c19a36d444c77e7080deef2c20eba967a224133763bb4ee4dd8f27"} Jan 23 08:26:17 crc kubenswrapper[4860]: I0123 08:26:17.553568 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/elastic-operator-79464f474f-dfj8n" podStartSLOduration=18.115761088 podStartE2EDuration="21.553549363s" podCreationTimestamp="2026-01-23 08:25:56 +0000 UTC" firstStartedPulling="2026-01-23 08:26:13.170183095 +0000 UTC m=+679.798233280" lastFinishedPulling="2026-01-23 08:26:16.60797137 +0000 UTC m=+683.236021555" observedRunningTime="2026-01-23 08:26:17.550667263 +0000 UTC m=+684.178717458" watchObservedRunningTime="2026-01-23 08:26:17.553549363 +0000 UTC m=+684.181599548" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.286894 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 08:26:18 crc kubenswrapper[4860]: E0123 08:26:18.287533 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="registry-server" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.287623 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="registry-server" Jan 23 08:26:18 crc kubenswrapper[4860]: E0123 08:26:18.287709 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="extract-content" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.287795 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="extract-content" Jan 23 08:26:18 crc kubenswrapper[4860]: E0123 08:26:18.287867 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="extract-utilities" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.287934 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="extract-utilities" Jan 23 08:26:18 crc kubenswrapper[4860]: E0123 08:26:18.288002 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="extract-utilities" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.288092 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="extract-utilities" Jan 23 08:26:18 crc kubenswrapper[4860]: E0123 08:26:18.288165 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="registry-server" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.288236 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="registry-server" Jan 23 08:26:18 crc kubenswrapper[4860]: E0123 08:26:18.288310 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="extract-content" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.288377 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="extract-content" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.288571 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca80b7c-3bf5-40a4-bad1-1c1595dc05c9" containerName="registry-server" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.288655 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b5317c-da20-4444-aa23-2a8453b48b2b" containerName="registry-server" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.289764 4860 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.292558 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.292620 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.292919 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.292967 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.292974 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.293227 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.293288 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.293570 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-9llm8" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.298482 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.308189 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462372 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462404 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462464 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462539 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462606 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462644 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462670 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462722 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462773 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462798 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462855 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" 
(UniqueName: \"kubernetes.io/downward-api/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462922 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.462958 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.463001 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.564037 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.564161 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.564716 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.564864 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.564195 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.565565 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.565291 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.565631 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.565662 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.565687 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566142 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566177 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566200 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566246 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566284 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566314 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566341 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566369 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566446 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.566760 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.567087 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.567093 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: 
\"kubernetes.io/configmap/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.568282 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.570897 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.570907 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.571464 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.572091 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.572189 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.572493 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.580538 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/da5fc4ce-0bad-4e92-9ed7-3b940e128b4f-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:18 crc kubenswrapper[4860]: I0123 08:26:18.607201 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:26:19 crc kubenswrapper[4860]: I0123 08:26:19.064003 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 08:26:19 crc kubenswrapper[4860]: I0123 08:26:19.546703 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f","Type":"ContainerStarted","Data":"bd8dbd4ae6125276fe15124e6d0615e66f3fbc57172d41ec03720771b4c135d8"} Jan 23 08:26:33 crc kubenswrapper[4860]: E0123 08:26:33.840938 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Jan 23 08:26:33 crc kubenswrapper[4860]: E0123 08:26:33.841601 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b7n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod interconnect-operator-5bb49f789d-8ch5j_service-telemetry(aed90fa7-5ace-46a6-b8a1-8de00e70331e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:26:33 crc kubenswrapper[4860]: E0123 08:26:33.843658 4860 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" podUID="aed90fa7-5ace-46a6-b8a1-8de00e70331e" Jan 23 08:26:34 crc kubenswrapper[4860]: E0123 08:26:34.635861 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" podUID="aed90fa7-5ace-46a6-b8a1-8de00e70331e" Jan 23 08:26:40 crc kubenswrapper[4860]: E0123 08:26:40.685624 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Jan 23 08:26:40 crc kubenswrapper[4860]: E0123 08:26:40.686916 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(da5fc4ce-0bad-4e92-9ed7-3b940e128b4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Jan 23 08:26:40 crc kubenswrapper[4860]: E0123 08:26:40.688091 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" Jan 23 08:26:41 crc kubenswrapper[4860]: E0123 08:26:41.686495 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" Jan 23 08:26:41 crc kubenswrapper[4860]: I0123 08:26:41.776040 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 08:26:41 crc kubenswrapper[4860]: I0123 08:26:41.805357 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.692001 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" event={"ID":"265698c3-44cd-419b-a5db-f35ec2ef8514","Type":"ContainerStarted","Data":"137f26ec80abe67bd39a5eea85ae3fa9f7be3d96db5d2751e28a6c10126aa1e9"} Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.692286 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.693397 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" event={"ID":"1305c249-d5a2-488b-9c2c-2f528c3a0f49","Type":"ContainerStarted","Data":"2ca85387581a674cbdccf5fd2f7a1d0ca4a8dba9c66fad28bd6b80dfa1a817b6"} Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.694969 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" event={"ID":"d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b","Type":"ContainerStarted","Data":"b84cbe02663b6413574127a96ac973b72156c56486ce355c3ef63e8b38f0d518"} Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.698507 4860 generic.go:334] "Generic (PLEG): container finished" podID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerID="dc40beb77f3808c2c140f0636471ad0819bfa114a5812b0e59a47e1ca04fee73" exitCode=0 Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.698611 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" event={"ID":"53f57f2b-fcc2-4afb-997e-97a4c3c3e184","Type":"ContainerDied","Data":"dc40beb77f3808c2c140f0636471ad0819bfa114a5812b0e59a47e1ca04fee73"} Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.704657 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" event={"ID":"5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd","Type":"ContainerStarted","Data":"d079656d8974ec3d7f02981d44ae2de99e2d62e3c730397e0cba99f7e2feb7da"} Jan 23 08:26:42 crc kubenswrapper[4860]: E0123 08:26:42.705698 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.725400 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" podStartSLOduration=3.593508656 podStartE2EDuration="53.725380117s" podCreationTimestamp="2026-01-23 08:25:49 +0000 UTC" firstStartedPulling="2026-01-23 08:25:50.389204607 +0000 UTC m=+657.017254792" lastFinishedPulling="2026-01-23 08:26:40.521076058 +0000 UTC m=+707.149126253" observedRunningTime="2026-01-23 08:26:42.71720936 +0000 UTC m=+709.345259565" watchObservedRunningTime="2026-01-23 08:26:42.725380117 +0000 UTC m=+709.353430302" Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.751985 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-qzmzn" podStartSLOduration=3.3707257569999998 podStartE2EDuration="53.751961379s" podCreationTimestamp="2026-01-23 08:25:49 +0000 UTC" firstStartedPulling="2026-01-23 08:25:50.140078391 +0000 UTC m=+656.768128576" lastFinishedPulling="2026-01-23 08:26:40.521314013 +0000 UTC m=+707.149364198" observedRunningTime="2026-01-23 08:26:42.749922189 +0000 UTC m=+709.377972414" watchObservedRunningTime="2026-01-23 08:26:42.751961379 +0000 UTC m=+709.380011564" Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.812930 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-68wdp" podStartSLOduration=3.166434334 podStartE2EDuration="53.8129122s" podCreationTimestamp="2026-01-23 08:25:49 +0000 UTC" firstStartedPulling="2026-01-23 08:25:49.874798686 +0000 UTC m=+656.502848871" lastFinishedPulling="2026-01-23 08:26:40.521276552 +0000 UTC m=+707.149326737" observedRunningTime="2026-01-23 08:26:42.807759406 +0000 UTC m=+709.435809621" watchObservedRunningTime="2026-01-23 08:26:42.8129122 +0000 UTC m=+709.440962395" Jan 23 08:26:42 crc kubenswrapper[4860]: I0123 08:26:42.825537 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54d658468c-9jckz" podStartSLOduration=3.494575587 podStartE2EDuration="53.825510794s" podCreationTimestamp="2026-01-23 08:25:49 +0000 UTC" firstStartedPulling="2026-01-23 08:25:50.245885357 +0000 UTC m=+656.873935542" lastFinishedPulling="2026-01-23 08:26:40.576820564 +0000 UTC m=+707.204870749" observedRunningTime="2026-01-23 08:26:42.824154652 +0000 UTC m=+709.452204907" watchObservedRunningTime="2026-01-23 08:26:42.825510794 +0000 UTC m=+709.453560999" Jan 23 08:26:43 crc kubenswrapper[4860]: E0123 08:26:43.710543 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" Jan 23 08:26:50 crc kubenswrapper[4860]: I0123 08:26:50.107492 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-d9lkg" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.685253 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.686246 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.689683 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.689704 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.689987 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2dlqm" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.690229 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.697807 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744168 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744223 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744331 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744388 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744416 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trg9r\" (UniqueName: \"kubernetes.io/projected/4a61a0b9-c305-4969-8643-6c9dbee6e068-kube-api-access-trg9r\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744453 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744479 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744504 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744529 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.744914 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.745113 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.745155 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.847385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.847294 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildcachedir\") 
pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.847532 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.847555 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trg9r\" (UniqueName: \"kubernetes.io/projected/4a61a0b9-c305-4969-8643-6c9dbee6e068-kube-api-access-trg9r\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848153 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848224 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848249 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848274 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848301 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 
08:26:51.848393 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848450 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848482 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.848750 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.849385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.850162 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.850372 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.850859 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.850914 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.851130 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.860933 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.864525 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:51 crc kubenswrapper[4860]: I0123 08:26:51.878594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trg9r\" (UniqueName: \"kubernetes.io/projected/4a61a0b9-c305-4969-8643-6c9dbee6e068-kube-api-access-trg9r\") pod \"service-telemetry-operator-1-build\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:52 crc kubenswrapper[4860]: I0123 08:26:52.004392 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:26:52 crc kubenswrapper[4860]: I0123 08:26:52.506125 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 08:26:52 crc kubenswrapper[4860]: I0123 08:26:52.758728 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4a61a0b9-c305-4969-8643-6c9dbee6e068","Type":"ContainerStarted","Data":"048e4a01aecff877fbf23803e5b822b91d6b5a63e0498694f7e8f322414437dd"} Jan 23 08:26:52 crc kubenswrapper[4860]: I0123 08:26:52.761533 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" event={"ID":"53f57f2b-fcc2-4afb-997e-97a4c3c3e184","Type":"ContainerStarted","Data":"72f24ea3e1627b78b25f2c9547dcd41100e7ce2f2e2296fa650ecb7821e2e7f3"} Jan 23 08:26:55 crc kubenswrapper[4860]: I0123 08:26:55.795514 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" podStartSLOduration=17.564117529 podStartE2EDuration="1m7.795498353s" podCreationTimestamp="2026-01-23 08:25:48 +0000 UTC" firstStartedPulling="2026-01-23 08:25:50.218761552 +0000 UTC m=+656.846811737" lastFinishedPulling="2026-01-23 08:26:40.450142366 +0000 UTC m=+707.078192561" observedRunningTime="2026-01-23 08:26:55.793926733 +0000 UTC m=+722.421976918" watchObservedRunningTime="2026-01-23 08:26:55.795498353 +0000 UTC m=+722.423548538" Jan 23 08:26:56 crc kubenswrapper[4860]: I0123 08:26:56.785110 4860 generic.go:334] "Generic (PLEG): container finished" podID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerID="72f24ea3e1627b78b25f2c9547dcd41100e7ce2f2e2296fa650ecb7821e2e7f3" exitCode=0 Jan 23 08:26:56 crc kubenswrapper[4860]: I0123 08:26:56.785209 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" event={"ID":"53f57f2b-fcc2-4afb-997e-97a4c3c3e184","Type":"ContainerDied","Data":"72f24ea3e1627b78b25f2c9547dcd41100e7ce2f2e2296fa650ecb7821e2e7f3"} Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.032525 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.144064 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw58d\" (UniqueName: \"kubernetes.io/projected/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-kube-api-access-nw58d\") pod \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.144186 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-bundle\") pod \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.144219 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-util\") pod \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\" (UID: \"53f57f2b-fcc2-4afb-997e-97a4c3c3e184\") " Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.145369 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-bundle" (OuterVolumeSpecName: "bundle") pod "53f57f2b-fcc2-4afb-997e-97a4c3c3e184" (UID: "53f57f2b-fcc2-4afb-997e-97a4c3c3e184"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.153193 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-kube-api-access-nw58d" (OuterVolumeSpecName: "kube-api-access-nw58d") pod "53f57f2b-fcc2-4afb-997e-97a4c3c3e184" (UID: "53f57f2b-fcc2-4afb-997e-97a4c3c3e184"). InnerVolumeSpecName "kube-api-access-nw58d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.168576 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-util" (OuterVolumeSpecName: "util") pod "53f57f2b-fcc2-4afb-997e-97a4c3c3e184" (UID: "53f57f2b-fcc2-4afb-997e-97a4c3c3e184"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.245634 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw58d\" (UniqueName: \"kubernetes.io/projected/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-kube-api-access-nw58d\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.245683 4860 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.245693 4860 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53f57f2b-fcc2-4afb-997e-97a4c3c3e184-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.805088 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" event={"ID":"aed90fa7-5ace-46a6-b8a1-8de00e70331e","Type":"ContainerStarted","Data":"986698a2428f92d10e8d4745d43c05846d8483abb9631104832e205a250a158e"} Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.806739 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" event={"ID":"53f57f2b-fcc2-4afb-997e-97a4c3c3e184","Type":"ContainerDied","Data":"708df532d16b35cf426da50306c9582201a9b6c18eebe6150db09fd01b48df89"} Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.806799 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="708df532d16b35cf426da50306c9582201a9b6c18eebe6150db09fd01b48df89" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.806828 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49" Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.808248 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f","Type":"ContainerStarted","Data":"940ce614997ef0b7a1322c24c0b322a739799647267b24437a151a9adea4cc51"} Jan 23 08:26:58 crc kubenswrapper[4860]: I0123 08:26:58.837381 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-8ch5j" podStartSLOduration=14.455247326 podStartE2EDuration="58.837359648s" podCreationTimestamp="2026-01-23 08:26:00 +0000 UTC" firstStartedPulling="2026-01-23 08:26:13.251495479 +0000 UTC m=+679.879545664" lastFinishedPulling="2026-01-23 08:26:57.633607801 +0000 UTC m=+724.261657986" observedRunningTime="2026-01-23 08:26:58.834768772 +0000 UTC m=+725.462818967" watchObservedRunningTime="2026-01-23 08:26:58.837359648 +0000 UTC m=+725.465409833" Jan 23 08:26:59 crc kubenswrapper[4860]: I0123 08:26:59.343455 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 08:26:59 crc kubenswrapper[4860]: I0123 08:26:59.814404 4860 generic.go:334] "Generic (PLEG): container finished" podID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerID="940ce614997ef0b7a1322c24c0b322a739799647267b24437a151a9adea4cc51" exitCode=0 Jan 23 08:26:59 crc kubenswrapper[4860]: I0123 08:26:59.814451 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f","Type":"ContainerDied","Data":"940ce614997ef0b7a1322c24c0b322a739799647267b24437a151a9adea4cc51"} Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.441631 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 23 08:27:01 crc kubenswrapper[4860]: E0123 08:27:01.442181 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="util" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.442196 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="util" Jan 23 08:27:01 crc kubenswrapper[4860]: E0123 08:27:01.442211 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="pull" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.442218 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="pull" Jan 23 08:27:01 crc kubenswrapper[4860]: E0123 08:27:01.442233 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="extract" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.442240 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="extract" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.442365 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f57f2b-fcc2-4afb-997e-97a4c3c3e184" containerName="extract" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.443318 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.446792 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.447633 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.447792 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.465852 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.481878 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.481932 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.481955 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.481985 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbll\" (UniqueName: \"kubernetes.io/projected/45534e9f-cf6f-4610-99bb-c9e656937ee6-kube-api-access-jbbll\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482036 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 
08:27:01.482060 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482106 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482238 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482257 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482280 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.482300 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583339 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583405 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583431 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583459 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583482 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583504 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583525 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583577 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583596 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbll\" (UniqueName: \"kubernetes.io/projected/45534e9f-cf6f-4610-99bb-c9e656937ee6-kube-api-access-jbbll\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583622 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-push\") pod \"service-telemetry-operator-2-build\" (UID: 
\"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.583783 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.584207 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.584432 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.585237 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.585289 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.585737 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.585973 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.586215 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.587936 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.605839 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.605922 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.622757 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbll\" (UniqueName: \"kubernetes.io/projected/45534e9f-cf6f-4610-99bb-c9e656937ee6-kube-api-access-jbbll\") pod \"service-telemetry-operator-2-build\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:01 crc kubenswrapper[4860]: I0123 08:27:01.757831 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:27:02 crc kubenswrapper[4860]: I0123 08:27:02.155653 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 23 08:27:02 crc kubenswrapper[4860]: I0123 08:27:02.829748 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerStarted","Data":"bd97bdedd78e2498e3ff26e6edeee608eaded883f40dc339c8be60298332d3ef"} Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.847429 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs"] Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.848864 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.853062 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-bk5db" Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.853388 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.854163 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.861650 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs"] Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.919086 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnb6\" (UniqueName: \"kubernetes.io/projected/b84e91ba-9ca2-42ad-96fd-7134bd938a3f-kube-api-access-ftnb6\") pod \"cert-manager-operator-controller-manager-5446d6888b-wcffs\" (UID: \"b84e91ba-9ca2-42ad-96fd-7134bd938a3f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:03 crc kubenswrapper[4860]: I0123 08:27:03.919161 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b84e91ba-9ca2-42ad-96fd-7134bd938a3f-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wcffs\" (UID: \"b84e91ba-9ca2-42ad-96fd-7134bd938a3f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.021098 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftnb6\" (UniqueName: \"kubernetes.io/projected/b84e91ba-9ca2-42ad-96fd-7134bd938a3f-kube-api-access-ftnb6\") pod \"cert-manager-operator-controller-manager-5446d6888b-wcffs\" (UID: \"b84e91ba-9ca2-42ad-96fd-7134bd938a3f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.021196 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b84e91ba-9ca2-42ad-96fd-7134bd938a3f-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wcffs\" (UID: \"b84e91ba-9ca2-42ad-96fd-7134bd938a3f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.021771 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b84e91ba-9ca2-42ad-96fd-7134bd938a3f-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-wcffs\" (UID: \"b84e91ba-9ca2-42ad-96fd-7134bd938a3f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.043760 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftnb6\" (UniqueName: \"kubernetes.io/projected/b84e91ba-9ca2-42ad-96fd-7134bd938a3f-kube-api-access-ftnb6\") pod \"cert-manager-operator-controller-manager-5446d6888b-wcffs\" (UID: \"b84e91ba-9ca2-42ad-96fd-7134bd938a3f\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.169460 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.754042 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs"] Jan 23 08:27:04 crc kubenswrapper[4860]: I0123 08:27:04.841539 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" event={"ID":"b84e91ba-9ca2-42ad-96fd-7134bd938a3f","Type":"ContainerStarted","Data":"d21b5a5122cea273b8f346fd3dbdb422f34caf16f773cea781e1068c9a00b16c"} Jan 23 08:27:05 crc kubenswrapper[4860]: I0123 08:27:05.849153 4860 generic.go:334] "Generic (PLEG): container finished" podID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerID="a4bee3aa1e732152d5b4cb09957998cd31c567108e204880d0141694aa128bf8" exitCode=0 Jan 23 08:27:05 crc kubenswrapper[4860]: I0123 08:27:05.849231 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f","Type":"ContainerDied","Data":"a4bee3aa1e732152d5b4cb09957998cd31c567108e204880d0141694aa128bf8"} Jan 23 08:27:10 crc kubenswrapper[4860]: E0123 08:27:10.219189 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe" Jan 23 08:27:10 crc kubenswrapper[4860]: E0123 08:27:10.220299 4860 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 23 08:27:10 crc kubenswrapper[4860]: init container &Container{Name:manage-dockerfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe,Command:[],Args:[openshift-manage-dockerfile 
--v=0],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:BUILD,Value:{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"service-telemetry-operator-1","namespace":"service-telemetry","uid":"ad82ceba-342d-4311-b792-aebf3918922d","resourceVersion":"33139","generation":1,"creationTimestamp":"2026-01-23T08:26:51Z","labels":{"build":"service-telemetry-operator","buildconfig":"service-telemetry-operator","openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"service-telemetry-operator","uid":"f3b6d150-c294-47ba-a18c-b096ccf5b492","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2026-01-23T08:26:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3b6d150-c294-47ba-a18c-b096ccf5b492\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM quay.io/operator-framework/ansible-operator:v1.38.1\n\n# temporarily switch to root user to adjust image layers\nUSER 0\n# Upstream CI builds need the additional EPEL sources for python3-passlib and python3-bcrypt but have no working repos to install epel-release\n# NO_PROXY is undefined in upstream CI builds, but defined (usually blank) during openshift builds (a possibly brittle hack)\nRUN bash -c -- 'if [ \"${NO_PROXY:-__ZZZZZ}\" == \"__ZZZZZ\" ]; then echo \"Applying upstream EPEL hacks\" \u0026\u0026 echo -e \"-----BEGIN PGP PUBLIC KEY 
BLOCK-----\\nmQINBGE3mOsBEACsU+XwJWDJVkItBaugXhXIIkb9oe+7aadELuVo0kBmc3HXt/Yp\\nCJW9hHEiGZ6z2jwgPqyJjZhCvcAWvgzKcvqE+9i0NItV1rzfxrBe2BtUtZmVcuE6\\n2b+SPfxQ2Hr8llaawRjt8BCFX/ZzM4/1Qk+EzlfTcEcpkMf6wdO7kD6ulBk/tbsW\\nDHX2lNcxszTf+XP9HXHWJlA2xBfP+Dk4gl4DnO2Y1xR0OSywE/QtvEbN5cY94ieu\\nn7CBy29AleMhmbnx9pw3NyxcFIAsEZHJoU4ZW9ulAJ/ogttSyAWeacW7eJGW31/Z\\n39cS+I4KXJgeGRI20RmpqfH0tuT+X5Da59YpjYxkbhSK3HYBVnNPhoJFUc2j5iKy\\nXLgkapu1xRnEJhw05kr4LCbud0NTvfecqSqa+59kuVc+zWmfTnGTYc0PXZ6Oa3rK\\n44UOmE6eAT5zd/ToleDO0VesN+EO7CXfRsm7HWGpABF5wNK3vIEF2uRr2VJMvgqS\\n9eNwhJyOzoca4xFSwCkc6dACGGkV+CqhufdFBhmcAsUotSxe3zmrBjqA0B/nxIvH\\nDVgOAMnVCe+Lmv8T0mFgqZSJdIUdKjnOLu/GRFhjDKIak4jeMBMTYpVnU+HhMHLq\\nuDiZkNEvEEGhBQmZuI8J55F/a6UURnxUwT3piyi3Pmr2IFD7ahBxPzOBCQARAQAB\\ntCdGZWRvcmEgKGVwZWw5KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAk4EEwEI\\nADgWIQT/itE0RZcQbs6BO5GKOHK/MihGfAUCYTeY6wIbDwULCQgHAgYVCgkICwIE\\nFgIDAQIeAQIXgAAKCRCKOHK/MihGfFX/EACBPWv20+ttYu1A5WvtHJPzwbj0U4yF\\n3zTQpBglQ2UfkRpYdipTlT3Ih6j5h2VmgRPtINCc/ZE28adrWpBoeFIS2YAKOCLC\\nnZYtHl2nCoLq1U7FSttUGsZ/t8uGCBgnugTfnIYcmlP1jKKA6RJAclK89evDQX5n\\nR9ZD+Cq3CBMlttvSTCht0qQVlwycedH8iWyYgP/mF0W35BIn7NuuZwWhgR00n/VG\\n4nbKPOzTWbsP45awcmivdrS74P6mL84WfkghipdmcoyVb1B8ZP4Y/Ke0RXOnLhNe\\nCfrXXvuW+Pvg2RTfwRDtehGQPAgXbmLmz2ZkV69RGIr54HJv84NDbqZovRTMr7gL\\n9k3ciCzXCiYQgM8yAyGHV0KEhFSQ1HV7gMnt9UmxbxBE2pGU7vu3CwjYga5DpwU7\\nw5wu1TmM5KgZtZvuWOTDnqDLf0cKoIbW8FeeCOn24elcj32bnQDuF9DPey1mqcvT\\n/yEo/Ushyz6CVYxN8DGgcy2M9JOsnmjDx02h6qgWGWDuKgb9jZrvRedpAQCeemEd\\nfhEs6ihqVxRFl16HxC4EVijybhAL76SsM2nbtIqW1apBQJQpXWtQwwdvgTVpdEtE\\nr4ArVJYX5LrswnWEQMOelugUG6S3ZjMfcyOa/O0364iY73vyVgaYK+2XtT2usMux\\nVL469Kj5m13T6w==\\n=Mjs/\\n-----END PGP PUBLIC KEY BLOCK-----\" \u003e /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9 \u0026\u0026 echo -e \"[epel]\\nname=Extra Packages for Enterprise Linux 9 - \\$basearch\\nmetalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9\u0026arch=\\$basearch\u0026infra=\\$infra\u0026content=\\$contentdir\\nenabled=1\\ngpgcheck=1\\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9\" \u003e /etc/yum.repos.d/epel.repo; fi'\n\n# update the base image to allow forward-looking optimistic updates during the testing phase, with the added benefit of helping move closer to passing security scans.\n# -- excludes ansible so it remains at 2.9 tag as shipped with the base image\n# -- installs python3-passlib and python3-bcrypt for oauth-proxy interface\n# -- cleans up the cached data from dnf to keep the image as small as possible\nRUN dnf update -y --exclude=ansible* \u0026\u0026 dnf install -y python3-passlib python3-bcrypt \u0026\u0026 dnf clean all \u0026\u0026 rm -rf /var/cache/dnf\n\nCOPY requirements.yml ${HOME}/requirements.yml\nRUN ansible-galaxy collection install -r ${HOME}/requirements.yml \\\n \u0026\u0026 chmod -R ug+rwx ${HOME}/.ansible\n\n# switch back to user 1001 when running the base image (non-root)\nUSER 1001\n\n# copy in required artifacts for the operator\nCOPY watches.yaml ${HOME}/watches.yaml\nCOPY roles/ ${HOME}/roles/\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e"},"pullSecret":{"name":"builder-dockercfg-2dlqm"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest"},"pushSecret":{"name":"builder-dockercfg-2dlqm"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e","fromRef":{"kind":"ImageStreamTag","name":"ansible-operator:v1.38.1"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest","config":{"kind":"BuildConfig","namespace":"service-telemetry","name":"service-telemetry-operator"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2026-01-23T08:26:51Z","lastTransitionTime":"2026-01-23T08:26:51Z"}]}} Jan 23 08:27:10 crc kubenswrapper[4860]: ,ValueFrom:nil,},EnvVar{Name:LANG,Value:C.utf8,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/registries.conf,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_DIR_PATH,Value:/var/run/configs/openshift.io/build-system/registries.d,ValueFrom:nil,},EnvVar{Name:BUILD_SIGNATURE_POLICY_PATH,Value:/var/run/configs/openshift.io/build-system/policy.json,ValueFrom:nil,},EnvVar{Name:BUILD_STORAGE_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/storage.conf,ValueFrom:nil,},EnvVar{Name:BUILD_BLOBCACHE_DIR,Value:/var/cache/blobs,ValueFrom:nil,},EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:http_proxy,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:https_proxy,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:no_proxy,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:buildworkdir,ReadOnly:false,MountPath:/tmp/build,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-system-configs,ReadOnly:true,MountPath:/var/run/configs/openshift.io/build-system,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-proxy-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-blob-cache,ReadOnly:false,MountPath:/var/cache/blobs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trg9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[CHOWN DAC_OVERRIDE],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-1-build_service-telemetry(4a61a0b9-c305-4969-8643-6c9dbee6e068): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 23 08:27:10 crc 
kubenswrapper[4860]: > logger="UnhandledError" Jan 23 08:27:10 crc kubenswrapper[4860]: E0123 08:27:10.221720 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manage-dockerfile\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-1-build" podUID="4a61a0b9-c305-4969-8643-6c9dbee6e068" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.050669 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"da5fc4ce-0bad-4e92-9ed7-3b940e128b4f","Type":"ContainerStarted","Data":"00f68e440def6d36c9940a6098ed4e970ffd1cf80616b5b8afdb7a2f09d44d9f"} Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.344638 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512803 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trg9r\" (UniqueName: \"kubernetes.io/projected/4a61a0b9-c305-4969-8643-6c9dbee6e068-kube-api-access-trg9r\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512861 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-node-pullsecrets\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512915 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-push\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512939 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildworkdir\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512970 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-run\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512991 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-system-configs\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.512984 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildcachedir\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513059 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-pull\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513126 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-proxy-ca-bundles\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513158 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-ca-bundles\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513177 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-blob-cache\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513197 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-root\") pod \"4a61a0b9-c305-4969-8643-6c9dbee6e068\" (UID: \"4a61a0b9-c305-4969-8643-6c9dbee6e068\") " Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513358 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513370 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4a61a0b9-c305-4969-8643-6c9dbee6e068-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513627 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: 
"4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.513961 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.514450 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.514511 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.514791 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.514887 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.515069 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.529166 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.539642 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.540427 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a61a0b9-c305-4969-8643-6c9dbee6e068-kube-api-access-trg9r" (OuterVolumeSpecName: "kube-api-access-trg9r") pod "4a61a0b9-c305-4969-8643-6c9dbee6e068" (UID: "4a61a0b9-c305-4969-8643-6c9dbee6e068"). InnerVolumeSpecName "kube-api-access-trg9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615694 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615746 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615759 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615774 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trg9r\" (UniqueName: \"kubernetes.io/projected/4a61a0b9-c305-4969-8643-6c9dbee6e068-kube-api-access-trg9r\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615785 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615799 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615811 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4a61a0b9-c305-4969-8643-6c9dbee6e068-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615822 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc kubenswrapper[4860]: I0123 08:27:11.615834 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/4a61a0b9-c305-4969-8643-6c9dbee6e068-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:11 crc 
kubenswrapper[4860]: I0123 08:27:11.615844 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a61a0b9-c305-4969-8643-6c9dbee6e068-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:12 crc kubenswrapper[4860]: I0123 08:27:12.056622 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"4a61a0b9-c305-4969-8643-6c9dbee6e068","Type":"ContainerDied","Data":"048e4a01aecff877fbf23803e5b822b91d6b5a63e0498694f7e8f322414437dd"} Jan 23 08:27:12 crc kubenswrapper[4860]: I0123 08:27:12.056709 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 08:27:12 crc kubenswrapper[4860]: I0123 08:27:12.093304 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 08:27:12 crc kubenswrapper[4860]: I0123 08:27:12.110521 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 08:27:13 crc kubenswrapper[4860]: I0123 08:27:13.664799 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a61a0b9-c305-4969-8643-6c9dbee6e068" path="/var/lib/kubelet/pods/4a61a0b9-c305-4969-8643-6c9dbee6e068/volumes" Jan 23 08:27:15 crc kubenswrapper[4860]: I0123 08:27:15.074964 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerStarted","Data":"257c670526ab82727426cb24b4c1efcee84bb9826ec0fd4d89e41dd49e92c650"} Jan 23 08:27:25 crc kubenswrapper[4860]: E0123 08:27:25.092157 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage2480060788/2\": happened during read: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Jan 23 08:27:25 crc kubenswrapper[4860]: E0123 08:27:25.093482 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) 
--unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftnb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-wcffs_cert-manager-operator(b84e91ba-9ca2-42ad-96fd-7134bd938a3f): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage2480060788/2\": happened during read: context canceled" logger="UnhandledError" Jan 23 08:27:25 crc kubenswrapper[4860]: E0123 08:27:25.094731 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage2480060788/2\\\": happened during read: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" podUID="b84e91ba-9ca2-42ad-96fd-7134bd938a3f" Jan 23 08:27:25 crc kubenswrapper[4860]: I0123 08:27:25.137406 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:27:25 crc kubenswrapper[4860]: E0123 08:27:25.138463 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" podUID="b84e91ba-9ca2-42ad-96fd-7134bd938a3f" Jan 23 08:27:25 crc kubenswrapper[4860]: I0123 08:27:25.222798 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=28.664321488 podStartE2EDuration="1m7.222782866s" podCreationTimestamp="2026-01-23 08:26:18 +0000 UTC" firstStartedPulling="2026-01-23 08:26:19.077746609 +0000 UTC m=+685.705796804" lastFinishedPulling="2026-01-23 08:26:57.636207997 +0000 UTC m=+724.264258182" observedRunningTime="2026-01-23 08:27:25.217646335 +0000 UTC m=+751.845696530" watchObservedRunningTime="2026-01-23 08:27:25.222782866 +0000 UTC m=+751.850833051" Jan 23 08:27:25 crc kubenswrapper[4860]: I0123 08:27:25.235793 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:25 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:25+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:25 crc kubenswrapper[4860]: > Jan 23 08:27:26 crc kubenswrapper[4860]: I0123 08:27:26.219063 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:26 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:26 crc kubenswrapper[4860]: > Jan 23 08:27:28 crc kubenswrapper[4860]: I0123 08:27:28.712744 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:28 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:28+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:28 crc kubenswrapper[4860]: > Jan 23 08:27:33 crc kubenswrapper[4860]: I0123 08:27:33.717430 4860 
prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:33 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:33+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:33 crc kubenswrapper[4860]: > Jan 23 08:27:34 crc kubenswrapper[4860]: I0123 08:27:34.244178 4860 generic.go:334] "Generic (PLEG): container finished" podID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerID="257c670526ab82727426cb24b4c1efcee84bb9826ec0fd4d89e41dd49e92c650" exitCode=0 Jan 23 08:27:34 crc kubenswrapper[4860]: I0123 08:27:34.244224 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerDied","Data":"257c670526ab82727426cb24b4c1efcee84bb9826ec0fd4d89e41dd49e92c650"} Jan 23 08:27:35 crc kubenswrapper[4860]: I0123 08:27:35.251108 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerStarted","Data":"7909562e9264fbcd5df77dd7c76efb793eb3066f8c974051f6116afa097ea85d"} Jan 23 08:27:38 crc kubenswrapper[4860]: I0123 08:27:38.273065 4860 generic.go:334] "Generic (PLEG): container finished" podID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerID="7909562e9264fbcd5df77dd7c76efb793eb3066f8c974051f6116afa097ea85d" exitCode=0 Jan 23 08:27:38 crc kubenswrapper[4860]: I0123 08:27:38.273163 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerDied","Data":"7909562e9264fbcd5df77dd7c76efb793eb3066f8c974051f6116afa097ea85d"} Jan 23 08:27:38 crc kubenswrapper[4860]: I0123 08:27:38.684647 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:38 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:38+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:38 crc kubenswrapper[4860]: > Jan 23 08:27:43 crc kubenswrapper[4860]: I0123 08:27:43.685090 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:43 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:43+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:43 crc kubenswrapper[4860]: > Jan 23 08:27:45 crc kubenswrapper[4860]: I0123 08:27:45.320323 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerStarted","Data":"75900a7807b5b8d3e5439e34be1d173250627c5da74bdecb0889e8fe75a6efac"} Jan 23 08:27:48 crc kubenswrapper[4860]: I0123 08:27:48.414497 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=39.309352729 podStartE2EDuration="47.414461526s" podCreationTimestamp="2026-01-23 08:27:01 +0000 UTC" firstStartedPulling="2026-01-23 08:27:02.1610916 +0000 UTC m=+728.789141775" lastFinishedPulling="2026-01-23 
08:27:10.266200387 +0000 UTC m=+736.894250572" observedRunningTime="2026-01-23 08:27:48.392027366 +0000 UTC m=+775.020077551" watchObservedRunningTime="2026-01-23 08:27:48.414461526 +0000 UTC m=+775.042511761" Jan 23 08:27:48 crc kubenswrapper[4860]: I0123 08:27:48.677256 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:48 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:48+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:48 crc kubenswrapper[4860]: > Jan 23 08:27:53 crc kubenswrapper[4860]: I0123 08:27:53.714912 4860 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="da5fc4ce-0bad-4e92-9ed7-3b940e128b4f" containerName="elasticsearch" probeResult="failure" output=< Jan 23 08:27:53 crc kubenswrapper[4860]: {"timestamp": "2026-01-23T08:27:53+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 08:27:53 crc kubenswrapper[4860]: > Jan 23 08:27:56 crc kubenswrapper[4860]: I0123 08:27:56.775990 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:27:56 crc kubenswrapper[4860]: I0123 08:27:56.776613 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:27:58 crc kubenswrapper[4860]: I0123 08:27:58.430466 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" event={"ID":"b84e91ba-9ca2-42ad-96fd-7134bd938a3f","Type":"ContainerStarted","Data":"aaec4c32eedac469463e3933092614f542bcd93f2123b69868ba8d45012d31a2"} Jan 23 08:27:58 crc kubenswrapper[4860]: I0123 08:27:58.457060 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-wcffs" podStartSLOduration=2.958531653 podStartE2EDuration="55.457039688s" podCreationTimestamp="2026-01-23 08:27:03 +0000 UTC" firstStartedPulling="2026-01-23 08:27:04.772774616 +0000 UTC m=+731.400824811" lastFinishedPulling="2026-01-23 08:27:57.271282671 +0000 UTC m=+783.899332846" observedRunningTime="2026-01-23 08:27:58.453139934 +0000 UTC m=+785.081190129" watchObservedRunningTime="2026-01-23 08:27:58.457039688 +0000 UTC m=+785.085089873" Jan 23 08:27:58 crc kubenswrapper[4860]: I0123 08:27:58.916987 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.838262 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx"] Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.839215 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.841721 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7mg85" Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.842282 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.842380 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.849509 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx"] Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.942271 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83613435-16bb-432d-87af-71aa80fecf79-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-n7hvx\" (UID: \"83613435-16bb-432d-87af-71aa80fecf79\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:01 crc kubenswrapper[4860]: I0123 08:28:01.942326 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tprp\" (UniqueName: \"kubernetes.io/projected/83613435-16bb-432d-87af-71aa80fecf79-kube-api-access-4tprp\") pod \"cert-manager-cainjector-855d9ccff4-n7hvx\" (UID: \"83613435-16bb-432d-87af-71aa80fecf79\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.043944 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83613435-16bb-432d-87af-71aa80fecf79-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-n7hvx\" (UID: \"83613435-16bb-432d-87af-71aa80fecf79\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.044009 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tprp\" (UniqueName: \"kubernetes.io/projected/83613435-16bb-432d-87af-71aa80fecf79-kube-api-access-4tprp\") pod \"cert-manager-cainjector-855d9ccff4-n7hvx\" (UID: \"83613435-16bb-432d-87af-71aa80fecf79\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.067981 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83613435-16bb-432d-87af-71aa80fecf79-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-n7hvx\" (UID: \"83613435-16bb-432d-87af-71aa80fecf79\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.068942 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tprp\" (UniqueName: \"kubernetes.io/projected/83613435-16bb-432d-87af-71aa80fecf79-kube-api-access-4tprp\") pod \"cert-manager-cainjector-855d9ccff4-n7hvx\" (UID: \"83613435-16bb-432d-87af-71aa80fecf79\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.157549 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.441750 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx"] Jan 23 08:28:02 crc kubenswrapper[4860]: W0123 08:28:02.448461 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83613435_16bb_432d_87af_71aa80fecf79.slice/crio-5b6a347847c57a2c35e7179054c33de7a9b5f6cced83f8ae730a48cc6487091c WatchSource:0}: Error finding container 5b6a347847c57a2c35e7179054c33de7a9b5f6cced83f8ae730a48cc6487091c: Status 404 returned error can't find the container with id 5b6a347847c57a2c35e7179054c33de7a9b5f6cced83f8ae730a48cc6487091c Jan 23 08:28:02 crc kubenswrapper[4860]: I0123 08:28:02.457514 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" event={"ID":"83613435-16bb-432d-87af-71aa80fecf79","Type":"ContainerStarted","Data":"5b6a347847c57a2c35e7179054c33de7a9b5f6cced83f8ae730a48cc6487091c"} Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.589910 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-75t87"] Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.591311 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.593282 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-w8tm4" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.608296 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-75t87"] Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.760984 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmqs\" (UniqueName: \"kubernetes.io/projected/89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b-kube-api-access-qpmqs\") pod \"cert-manager-webhook-f4fb5df64-75t87\" (UID: \"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.761137 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-75t87\" (UID: \"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.862414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmqs\" (UniqueName: \"kubernetes.io/projected/89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b-kube-api-access-qpmqs\") pod \"cert-manager-webhook-f4fb5df64-75t87\" (UID: \"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.862506 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-75t87\" (UID: \"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b\") " 
pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.883487 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-75t87\" (UID: \"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.902411 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmqs\" (UniqueName: \"kubernetes.io/projected/89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b-kube-api-access-qpmqs\") pod \"cert-manager-webhook-f4fb5df64-75t87\" (UID: \"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:10 crc kubenswrapper[4860]: I0123 08:28:10.957561 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.581542 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-qptpz"] Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.582768 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.584424 4860 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nf2jp" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.587868 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-qptpz"] Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.721669 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f30f168-993f-4527-9e0b-2b12457e1547-bound-sa-token\") pod \"cert-manager-86cb77c54b-qptpz\" (UID: \"0f30f168-993f-4527-9e0b-2b12457e1547\") " pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.721763 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kbp\" (UniqueName: \"kubernetes.io/projected/0f30f168-993f-4527-9e0b-2b12457e1547-kube-api-access-d6kbp\") pod \"cert-manager-86cb77c54b-qptpz\" (UID: \"0f30f168-993f-4527-9e0b-2b12457e1547\") " pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.823125 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f30f168-993f-4527-9e0b-2b12457e1547-bound-sa-token\") pod \"cert-manager-86cb77c54b-qptpz\" (UID: \"0f30f168-993f-4527-9e0b-2b12457e1547\") " pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.823214 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kbp\" (UniqueName: \"kubernetes.io/projected/0f30f168-993f-4527-9e0b-2b12457e1547-kube-api-access-d6kbp\") pod \"cert-manager-86cb77c54b-qptpz\" (UID: \"0f30f168-993f-4527-9e0b-2b12457e1547\") " pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.843722 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d6kbp\" (UniqueName: \"kubernetes.io/projected/0f30f168-993f-4527-9e0b-2b12457e1547-kube-api-access-d6kbp\") pod \"cert-manager-86cb77c54b-qptpz\" (UID: \"0f30f168-993f-4527-9e0b-2b12457e1547\") " pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.845212 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f30f168-993f-4527-9e0b-2b12457e1547-bound-sa-token\") pod \"cert-manager-86cb77c54b-qptpz\" (UID: \"0f30f168-993f-4527-9e0b-2b12457e1547\") " pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:15 crc kubenswrapper[4860]: I0123 08:28:15.902524 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-qptpz" Jan 23 08:28:18 crc kubenswrapper[4860]: I0123 08:28:18.861501 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-75t87"] Jan 23 08:28:18 crc kubenswrapper[4860]: W0123 08:28:18.883480 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ef0dff_b1b7_49c5_9fcc_d94ffaefcf7b.slice/crio-e6b59c15eca900b702dd2a5096edaf859578c18a188b6f3e0239884ecda841f4 WatchSource:0}: Error finding container e6b59c15eca900b702dd2a5096edaf859578c18a188b6f3e0239884ecda841f4: Status 404 returned error can't find the container with id e6b59c15eca900b702dd2a5096edaf859578c18a188b6f3e0239884ecda841f4 Jan 23 08:28:18 crc kubenswrapper[4860]: I0123 08:28:18.950328 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-qptpz"] Jan 23 08:28:18 crc kubenswrapper[4860]: W0123 08:28:18.951654 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f30f168_993f_4527_9e0b_2b12457e1547.slice/crio-ca3a3ce79c5702d5830d9b1423791a2cd04a2dec30783708a156fcbe9f022ea0 WatchSource:0}: Error finding container ca3a3ce79c5702d5830d9b1423791a2cd04a2dec30783708a156fcbe9f022ea0: Status 404 returned error can't find the container with id ca3a3ce79c5702d5830d9b1423791a2cd04a2dec30783708a156fcbe9f022ea0 Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.566959 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" event={"ID":"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b","Type":"ContainerStarted","Data":"15932f377e32d8b1ab5cb6efbe0ea095bd73f80ea1982b79be84dc319d39b3f2"} Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.567309 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.567324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" event={"ID":"89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b","Type":"ContainerStarted","Data":"e6b59c15eca900b702dd2a5096edaf859578c18a188b6f3e0239884ecda841f4"} Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.568418 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" event={"ID":"83613435-16bb-432d-87af-71aa80fecf79","Type":"ContainerStarted","Data":"92aece860978c34428bf2c6219931aa6e2662a2deb42f293427153de1ea92aa5"} Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.569866 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-86cb77c54b-qptpz" event={"ID":"0f30f168-993f-4527-9e0b-2b12457e1547","Type":"ContainerStarted","Data":"9e837418318dc6186d0c581c69b48148c428a25d88c9670c6074abf71f046739"} Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.569893 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-qptpz" event={"ID":"0f30f168-993f-4527-9e0b-2b12457e1547","Type":"ContainerStarted","Data":"ca3a3ce79c5702d5830d9b1423791a2cd04a2dec30783708a156fcbe9f022ea0"} Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.582612 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" podStartSLOduration=9.582590294 podStartE2EDuration="9.582590294s" podCreationTimestamp="2026-01-23 08:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:28:19.580835602 +0000 UTC m=+806.208885787" watchObservedRunningTime="2026-01-23 08:28:19.582590294 +0000 UTC m=+806.210640479" Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.596485 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-qptpz" podStartSLOduration=4.596472149 podStartE2EDuration="4.596472149s" podCreationTimestamp="2026-01-23 08:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:28:19.593488447 +0000 UTC m=+806.221538642" watchObservedRunningTime="2026-01-23 08:28:19.596472149 +0000 UTC m=+806.224522334" Jan 23 08:28:19 crc kubenswrapper[4860]: I0123 08:28:19.619203 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-n7hvx" podStartSLOduration=2.620094839 podStartE2EDuration="18.619185296s" podCreationTimestamp="2026-01-23 08:28:01 +0000 UTC" firstStartedPulling="2026-01-23 08:28:02.450736338 +0000 UTC m=+789.078786533" lastFinishedPulling="2026-01-23 08:28:18.449826805 +0000 UTC m=+805.077876990" observedRunningTime="2026-01-23 08:28:19.615956819 +0000 UTC m=+806.244007004" watchObservedRunningTime="2026-01-23 08:28:19.619185296 +0000 UTC m=+806.247235481" Jan 23 08:28:25 crc kubenswrapper[4860]: I0123 08:28:25.960842 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-75t87" Jan 23 08:28:26 crc kubenswrapper[4860]: I0123 08:28:26.775807 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:28:26 crc kubenswrapper[4860]: I0123 08:28:26.775911 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:28:52 crc kubenswrapper[4860]: I0123 08:28:52.947628 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z99pz"] Jan 23 08:28:52 crc kubenswrapper[4860]: I0123 08:28:52.949216 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:52 crc kubenswrapper[4860]: I0123 08:28:52.971096 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z99pz"] Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.020762 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96r5h\" (UniqueName: \"kubernetes.io/projected/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-kube-api-access-96r5h\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.020811 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-utilities\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.020835 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-catalog-content\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.122037 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-catalog-content\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.122161 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96r5h\" (UniqueName: \"kubernetes.io/projected/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-kube-api-access-96r5h\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.122189 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-utilities\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.122631 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-catalog-content\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.122651 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-utilities\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.142206 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-96r5h\" (UniqueName: \"kubernetes.io/projected/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-kube-api-access-96r5h\") pod \"community-operators-z99pz\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.270131 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:28:53 crc kubenswrapper[4860]: I0123 08:28:53.771454 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z99pz"] Jan 23 08:28:53 crc kubenswrapper[4860]: W0123 08:28:53.777638 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ec7f8d0_ce9b_478c_bf5c_ea0c081b4d98.slice/crio-22d05cc8d2d6f82518b14b327e566e5369c910e90314a5f6d2c0d557bdefaa63 WatchSource:0}: Error finding container 22d05cc8d2d6f82518b14b327e566e5369c910e90314a5f6d2c0d557bdefaa63: Status 404 returned error can't find the container with id 22d05cc8d2d6f82518b14b327e566e5369c910e90314a5f6d2c0d557bdefaa63 Jan 23 08:28:54 crc kubenswrapper[4860]: I0123 08:28:54.781601 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerID="2e50d868a90f9ae30194212ff9cadaaecf4a4a3b19d45b2d2e9090076d876eb8" exitCode=0 Jan 23 08:28:54 crc kubenswrapper[4860]: I0123 08:28:54.781768 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z99pz" event={"ID":"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98","Type":"ContainerDied","Data":"2e50d868a90f9ae30194212ff9cadaaecf4a4a3b19d45b2d2e9090076d876eb8"} Jan 23 08:28:54 crc kubenswrapper[4860]: I0123 08:28:54.781889 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z99pz" event={"ID":"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98","Type":"ContainerStarted","Data":"22d05cc8d2d6f82518b14b327e566e5369c910e90314a5f6d2c0d557bdefaa63"} Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 08:28:56.775454 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 08:28:56.776060 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 08:28:56.776112 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 08:28:56.776777 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"52a615d6aca47e73053df92f20f2d23afd3eb30795f9436f06952408ac8ca4f6"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 
08:28:56.776833 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://52a615d6aca47e73053df92f20f2d23afd3eb30795f9436f06952408ac8ca4f6" gracePeriod=600 Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 08:28:56.800132 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerID="f9f47b70e4c8e55b09578b8e94e2b5da0e3ecf245e19b5ba6e9ee84c487ae565" exitCode=0 Jan 23 08:28:56 crc kubenswrapper[4860]: I0123 08:28:56.800182 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z99pz" event={"ID":"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98","Type":"ContainerDied","Data":"f9f47b70e4c8e55b09578b8e94e2b5da0e3ecf245e19b5ba6e9ee84c487ae565"} Jan 23 08:28:58 crc kubenswrapper[4860]: I0123 08:28:58.816330 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z99pz" event={"ID":"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98","Type":"ContainerStarted","Data":"7dac9b7766de05749864ea6e8efc26a22f2521d739f61eb6aac187093e9e3238"} Jan 23 08:28:58 crc kubenswrapper[4860]: I0123 08:28:58.824320 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="52a615d6aca47e73053df92f20f2d23afd3eb30795f9436f06952408ac8ca4f6" exitCode=0 Jan 23 08:28:58 crc kubenswrapper[4860]: I0123 08:28:58.824395 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"52a615d6aca47e73053df92f20f2d23afd3eb30795f9436f06952408ac8ca4f6"} Jan 23 08:28:58 crc kubenswrapper[4860]: I0123 08:28:58.824467 4860 scope.go:117] "RemoveContainer" containerID="a5f19eb6680810f114123e7b69b3ddca9ec3e33281e2c5954aa55c863d5a13f8" Jan 23 08:28:59 crc kubenswrapper[4860]: I0123 08:28:59.832694 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"55ddb61fc36e2367c62e594d1b3c5e8526e9c5309c997e078accc938d2704751"} Jan 23 08:28:59 crc kubenswrapper[4860]: I0123 08:28:59.855749 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z99pz" podStartSLOduration=4.197643063 podStartE2EDuration="7.855728357s" podCreationTimestamp="2026-01-23 08:28:52 +0000 UTC" firstStartedPulling="2026-01-23 08:28:54.783400109 +0000 UTC m=+841.411450294" lastFinishedPulling="2026-01-23 08:28:58.441485403 +0000 UTC m=+845.069535588" observedRunningTime="2026-01-23 08:28:58.859255351 +0000 UTC m=+845.487305536" watchObservedRunningTime="2026-01-23 08:28:59.855728357 +0000 UTC m=+846.483778542" Jan 23 08:29:03 crc kubenswrapper[4860]: I0123 08:29:03.270843 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:29:03 crc kubenswrapper[4860]: I0123 08:29:03.272442 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:29:03 crc kubenswrapper[4860]: I0123 08:29:03.312190 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:29:03 crc kubenswrapper[4860]: I0123 08:29:03.896820 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:29:03 crc kubenswrapper[4860]: I0123 08:29:03.941208 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z99pz"] Jan 23 08:29:05 crc kubenswrapper[4860]: I0123 08:29:05.980177 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z99pz" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="registry-server" containerID="cri-o://7dac9b7766de05749864ea6e8efc26a22f2521d739f61eb6aac187093e9e3238" gracePeriod=2 Jan 23 08:29:06 crc kubenswrapper[4860]: I0123 08:29:06.987328 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerID="7dac9b7766de05749864ea6e8efc26a22f2521d739f61eb6aac187093e9e3238" exitCode=0 Jan 23 08:29:06 crc kubenswrapper[4860]: I0123 08:29:06.987550 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z99pz" event={"ID":"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98","Type":"ContainerDied","Data":"7dac9b7766de05749864ea6e8efc26a22f2521d739f61eb6aac187093e9e3238"} Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.720343 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.822155 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-catalog-content\") pod \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.822551 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-utilities\") pod \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.822598 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96r5h\" (UniqueName: \"kubernetes.io/projected/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-kube-api-access-96r5h\") pod \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\" (UID: \"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98\") " Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.823464 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-utilities" (OuterVolumeSpecName: "utilities") pod "4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" (UID: "4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.827534 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-kube-api-access-96r5h" (OuterVolumeSpecName: "kube-api-access-96r5h") pod "4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" (UID: "4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98"). InnerVolumeSpecName "kube-api-access-96r5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.877479 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" (UID: "4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.923808 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.923844 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.923856 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96r5h\" (UniqueName: \"kubernetes.io/projected/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98-kube-api-access-96r5h\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.995665 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z99pz" event={"ID":"4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98","Type":"ContainerDied","Data":"22d05cc8d2d6f82518b14b327e566e5369c910e90314a5f6d2c0d557bdefaa63"} Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.996061 4860 scope.go:117] "RemoveContainer" containerID="7dac9b7766de05749864ea6e8efc26a22f2521d739f61eb6aac187093e9e3238" Jan 23 08:29:07 crc kubenswrapper[4860]: I0123 08:29:07.996005 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z99pz" Jan 23 08:29:08 crc kubenswrapper[4860]: I0123 08:29:08.030740 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z99pz"] Jan 23 08:29:08 crc kubenswrapper[4860]: I0123 08:29:08.030795 4860 scope.go:117] "RemoveContainer" containerID="f9f47b70e4c8e55b09578b8e94e2b5da0e3ecf245e19b5ba6e9ee84c487ae565" Jan 23 08:29:08 crc kubenswrapper[4860]: I0123 08:29:08.035797 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z99pz"] Jan 23 08:29:08 crc kubenswrapper[4860]: I0123 08:29:08.056150 4860 scope.go:117] "RemoveContainer" containerID="2e50d868a90f9ae30194212ff9cadaaecf4a4a3b19d45b2d2e9090076d876eb8" Jan 23 08:29:09 crc kubenswrapper[4860]: I0123 08:29:09.666583 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" path="/var/lib/kubelet/pods/4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98/volumes" Jan 23 08:29:41 crc kubenswrapper[4860]: I0123 08:29:41.216003 4860 generic.go:334] "Generic (PLEG): container finished" podID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerID="75900a7807b5b8d3e5439e34be1d173250627c5da74bdecb0889e8fe75a6efac" exitCode=0 Jan 23 08:29:41 crc kubenswrapper[4860]: I0123 08:29:41.216053 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerDied","Data":"75900a7807b5b8d3e5439e34be1d173250627c5da74bdecb0889e8fe75a6efac"} Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.433837 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614509 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildcachedir\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614553 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-blob-cache\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614577 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbll\" (UniqueName: \"kubernetes.io/projected/45534e9f-cf6f-4610-99bb-c9e656937ee6-kube-api-access-jbbll\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614593 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-root\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614627 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-pull\") pod 
\"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614660 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-push\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614696 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614714 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-proxy-ca-bundles\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614789 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildworkdir\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614834 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-ca-bundles\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614884 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-system-configs\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614926 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-run\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.614953 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-node-pullsecrets\") pod \"45534e9f-cf6f-4610-99bb-c9e656937ee6\" (UID: \"45534e9f-cf6f-4610-99bb-c9e656937ee6\") " Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.615406 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.615446 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.615488 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.615875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.616589 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.616821 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.620415 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.624313 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.624344 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45534e9f-cf6f-4610-99bb-c9e656937ee6-kube-api-access-jbbll" (OuterVolumeSpecName: "kube-api-access-jbbll") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "kube-api-access-jbbll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.643961 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716890 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716931 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716943 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45534e9f-cf6f-4610-99bb-c9e656937ee6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716954 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbll\" (UniqueName: \"kubernetes.io/projected/45534e9f-cf6f-4610-99bb-c9e656937ee6-kube-api-access-jbbll\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716966 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716979 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/45534e9f-cf6f-4610-99bb-c9e656937ee6-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.716992 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.717005 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.717034 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.795889 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:29:42 crc kubenswrapper[4860]: I0123 08:29:42.818299 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:43 crc kubenswrapper[4860]: I0123 08:29:43.230463 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"45534e9f-cf6f-4610-99bb-c9e656937ee6","Type":"ContainerDied","Data":"bd97bdedd78e2498e3ff26e6edeee608eaded883f40dc339c8be60298332d3ef"} Jan 23 08:29:43 crc kubenswrapper[4860]: I0123 08:29:43.230510 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd97bdedd78e2498e3ff26e6edeee608eaded883f40dc339c8be60298332d3ef" Jan 23 08:29:43 crc kubenswrapper[4860]: I0123 08:29:43.230597 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 08:29:44 crc kubenswrapper[4860]: I0123 08:29:44.347722 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "45534e9f-cf6f-4610-99bb-c9e656937ee6" (UID: "45534e9f-cf6f-4610-99bb-c9e656937ee6"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:29:44 crc kubenswrapper[4860]: I0123 08:29:44.440108 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45534e9f-cf6f-4610-99bb-c9e656937ee6-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793313 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 08:29:47 crc kubenswrapper[4860]: E0123 08:29:47.793867 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="extract-content" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793885 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="extract-content" Jan 23 08:29:47 crc kubenswrapper[4860]: E0123 08:29:47.793898 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerName="git-clone" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793904 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerName="git-clone" Jan 23 08:29:47 crc kubenswrapper[4860]: E0123 08:29:47.793911 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerName="manage-dockerfile" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793919 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerName="manage-dockerfile" Jan 23 08:29:47 crc kubenswrapper[4860]: E0123 08:29:47.793938 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerName="docker-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793945 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" 
containerName="docker-build" Jan 23 08:29:47 crc kubenswrapper[4860]: E0123 08:29:47.793964 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="extract-utilities" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793971 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="extract-utilities" Jan 23 08:29:47 crc kubenswrapper[4860]: E0123 08:29:47.793980 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="registry-server" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.793987 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="registry-server" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.794127 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec7f8d0-ce9b-478c-bf5c-ea0c081b4d98" containerName="registry-server" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.794141 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="45534e9f-cf6f-4610-99bb-c9e656937ee6" containerName="docker-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.795801 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.797302 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2dlqm" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.797432 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.797710 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.798209 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.813677 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984191 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984263 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984304 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: 
\"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984338 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984355 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrc79\" (UniqueName: \"kubernetes.io/projected/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-kube-api-access-vrc79\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984375 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984437 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984458 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984602 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984665 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 
08:29:47 crc kubenswrapper[4860]: I0123 08:29:47.984758 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.085776 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.085865 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.085898 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.085929 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.085972 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086011 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086055 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrc79\" (UniqueName: \"kubernetes.io/projected/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-kube-api-access-vrc79\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086059 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildcachedir\") pod 
\"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086080 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086201 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086363 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086467 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086557 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086662 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086689 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086801 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086831 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086859 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.086928 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.087108 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.087706 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.091934 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.098470 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.110103 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrc79\" (UniqueName: \"kubernetes.io/projected/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-kube-api-access-vrc79\") pod \"smart-gateway-operator-1-build\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.410076 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:29:48 crc kubenswrapper[4860]: I0123 08:29:48.673768 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 08:29:49 crc kubenswrapper[4860]: I0123 08:29:49.271918 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerID="b0cb493c0b1bf192b3707218fb8b2aff791b0ebec40d6de0ee5c9d1740508ef2" exitCode=0 Jan 23 08:29:49 crc kubenswrapper[4860]: I0123 08:29:49.271978 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cd0b89b2-b5be-4c28-a263-80b1c613c1ce","Type":"ContainerDied","Data":"b0cb493c0b1bf192b3707218fb8b2aff791b0ebec40d6de0ee5c9d1740508ef2"} Jan 23 08:29:49 crc kubenswrapper[4860]: I0123 08:29:49.272058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cd0b89b2-b5be-4c28-a263-80b1c613c1ce","Type":"ContainerStarted","Data":"77d22fbadcb9fdb17e0fcf0b1fec83f886356abc0cd670f04d867e649cacc206"} Jan 23 08:29:50 crc kubenswrapper[4860]: I0123 08:29:50.282522 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cd0b89b2-b5be-4c28-a263-80b1c613c1ce","Type":"ContainerStarted","Data":"a70f1c96ffc69497246972fcc4ec720b10e65d242989c6d551b5b471773c8dfe"} Jan 23 08:29:50 crc kubenswrapper[4860]: I0123 08:29:50.306035 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.30600105 podStartE2EDuration="3.30600105s" podCreationTimestamp="2026-01-23 08:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:29:50.30515752 +0000 UTC m=+896.933207765" watchObservedRunningTime="2026-01-23 08:29:50.30600105 +0000 UTC m=+896.934051245" Jan 23 08:29:58 crc kubenswrapper[4860]: I0123 08:29:58.828339 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 08:29:58 crc kubenswrapper[4860]: I0123 08:29:58.829259 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerName="docker-build" containerID="cri-o://a70f1c96ffc69497246972fcc4ec720b10e65d242989c6d551b5b471773c8dfe" gracePeriod=30 Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.168515 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps"] Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.169777 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.172346 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.174086 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.181864 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps"] Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.252937 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmdv\" (UniqueName: \"kubernetes.io/projected/4ce94987-4c86-492f-aba8-32676ef8dd97-kube-api-access-qhmdv\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.253000 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce94987-4c86-492f-aba8-32676ef8dd97-config-volume\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.253094 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ce94987-4c86-492f-aba8-32676ef8dd97-secret-volume\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.354517 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmdv\" (UniqueName: \"kubernetes.io/projected/4ce94987-4c86-492f-aba8-32676ef8dd97-kube-api-access-qhmdv\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.354579 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce94987-4c86-492f-aba8-32676ef8dd97-config-volume\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.354621 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ce94987-4c86-492f-aba8-32676ef8dd97-secret-volume\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.355634 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce94987-4c86-492f-aba8-32676ef8dd97-config-volume\") pod 
\"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.363481 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ce94987-4c86-492f-aba8-32676ef8dd97-secret-volume\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.371151 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmdv\" (UniqueName: \"kubernetes.io/projected/4ce94987-4c86-492f-aba8-32676ef8dd97-kube-api-access-qhmdv\") pod \"collect-profiles-29485950-w9tps\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.453929 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.455148 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.455857 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.455887 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.455912 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.455930 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456068 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc 
kubenswrapper[4860]: I0123 08:30:00.456156 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtbr\" (UniqueName: \"kubernetes.io/projected/f587f90f-1305-43b3-8caf-1965979ce4da-kube-api-access-nhtbr\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456176 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456194 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456214 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456348 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456405 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.456461 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.458063 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.458064 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.458311 4860 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.468643 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.489171 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558392 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558514 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtbr\" (UniqueName: \"kubernetes.io/projected/f587f90f-1305-43b3-8caf-1965979ce4da-kube-api-access-nhtbr\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558547 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558567 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558585 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558636 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558675 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558747 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: 
\"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558802 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558824 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558850 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.558871 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.559391 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.559610 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.559717 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.560164 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.560429 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.560661 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.560681 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.561048 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.561339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.567708 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.569855 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.579527 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtbr\" (UniqueName: \"kubernetes.io/projected/f587f90f-1305-43b3-8caf-1965979ce4da-kube-api-access-nhtbr\") pod \"smart-gateway-operator-2-build\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.698651 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps"] Jan 23 08:30:00 crc kubenswrapper[4860]: I0123 08:30:00.842954 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.037266 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.356282 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_cd0b89b2-b5be-4c28-a263-80b1c613c1ce/docker-build/0.log" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.356884 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerID="a70f1c96ffc69497246972fcc4ec720b10e65d242989c6d551b5b471773c8dfe" exitCode=1 Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.357138 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cd0b89b2-b5be-4c28-a263-80b1c613c1ce","Type":"ContainerDied","Data":"a70f1c96ffc69497246972fcc4ec720b10e65d242989c6d551b5b471773c8dfe"} Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.358444 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" event={"ID":"4ce94987-4c86-492f-aba8-32676ef8dd97","Type":"ContainerStarted","Data":"098eef84c89e23401f2872144dddd0a0131635ffe8d45b9fe7ff5fd76e47842a"} Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.359721 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerStarted","Data":"ea68bb11d6f78ac5a14a7fed265b3ef8ab67891c5590b6315a89717a5c9a16e4"} Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.827260 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_cd0b89b2-b5be-4c28-a263-80b1c613c1ce/docker-build/0.log" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.828111 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882458 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-blob-cache\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882515 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-root\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882541 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-run\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882572 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildcachedir\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882601 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-pull\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882622 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildworkdir\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882658 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrc79\" (UniqueName: \"kubernetes.io/projected/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-kube-api-access-vrc79\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882668 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882684 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-system-configs\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882724 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-push\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882845 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-ca-bundles\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882883 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-proxy-ca-bundles\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.882911 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-node-pullsecrets\") pod \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\" (UID: \"cd0b89b2-b5be-4c28-a263-80b1c613c1ce\") " Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.883180 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.883213 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.883784 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.884198 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.884257 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.884457 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.884483 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.888633 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.888756 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-kube-api-access-vrc79" (OuterVolumeSpecName: "kube-api-access-vrc79") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "kube-api-access-vrc79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.898808 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.983958 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984002 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984033 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984046 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984059 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984072 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984083 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984094 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:01 crc kubenswrapper[4860]: I0123 08:30:01.984105 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrc79\" (UniqueName: \"kubernetes.io/projected/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-kube-api-access-vrc79\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.328975 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.366741 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" event={"ID":"4ce94987-4c86-492f-aba8-32676ef8dd97","Type":"ContainerStarted","Data":"0f82afe3027c28861b78de34e9991ba55d55433e30e0e5e37d030af1e84a7774"} Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.368128 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerStarted","Data":"1ee6aafd9373e6b5cf4d34c85bc144be90cc31ae788ccb2ac167ec38f6f96d09"} Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.370752 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_cd0b89b2-b5be-4c28-a263-80b1c613c1ce/docker-build/0.log" Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.375908 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"cd0b89b2-b5be-4c28-a263-80b1c613c1ce","Type":"ContainerDied","Data":"77d22fbadcb9fdb17e0fcf0b1fec83f886356abc0cd670f04d867e649cacc206"} Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.375943 4860 scope.go:117] "RemoveContainer" containerID="a70f1c96ffc69497246972fcc4ec720b10e65d242989c6d551b5b471773c8dfe" Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.375969 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.385138 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" podStartSLOduration=2.385121052 podStartE2EDuration="2.385121052s" podCreationTimestamp="2026-01-23 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:30:02.382313004 +0000 UTC m=+909.010363189" watchObservedRunningTime="2026-01-23 08:30:02.385121052 +0000 UTC m=+909.013171237" Jan 23 08:30:02 crc kubenswrapper[4860]: I0123 08:30:02.389764 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:03 crc kubenswrapper[4860]: I0123 08:30:03.383163 4860 generic.go:334] "Generic (PLEG): container finished" podID="4ce94987-4c86-492f-aba8-32676ef8dd97" containerID="0f82afe3027c28861b78de34e9991ba55d55433e30e0e5e37d030af1e84a7774" exitCode=0 Jan 23 08:30:03 crc kubenswrapper[4860]: I0123 08:30:03.383270 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" event={"ID":"4ce94987-4c86-492f-aba8-32676ef8dd97","Type":"ContainerDied","Data":"0f82afe3027c28861b78de34e9991ba55d55433e30e0e5e37d030af1e84a7774"} Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.212334 4860 scope.go:117] "RemoveContainer" containerID="b0cb493c0b1bf192b3707218fb8b2aff791b0ebec40d6de0ee5c9d1740508ef2" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.684738 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.720196 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce94987-4c86-492f-aba8-32676ef8dd97-config-volume\") pod \"4ce94987-4c86-492f-aba8-32676ef8dd97\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.720289 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ce94987-4c86-492f-aba8-32676ef8dd97-secret-volume\") pod \"4ce94987-4c86-492f-aba8-32676ef8dd97\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.720411 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhmdv\" (UniqueName: \"kubernetes.io/projected/4ce94987-4c86-492f-aba8-32676ef8dd97-kube-api-access-qhmdv\") pod \"4ce94987-4c86-492f-aba8-32676ef8dd97\" (UID: \"4ce94987-4c86-492f-aba8-32676ef8dd97\") " Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.721187 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce94987-4c86-492f-aba8-32676ef8dd97-config-volume" (OuterVolumeSpecName: "config-volume") pod "4ce94987-4c86-492f-aba8-32676ef8dd97" (UID: "4ce94987-4c86-492f-aba8-32676ef8dd97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.724963 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce94987-4c86-492f-aba8-32676ef8dd97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4ce94987-4c86-492f-aba8-32676ef8dd97" (UID: "4ce94987-4c86-492f-aba8-32676ef8dd97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.725240 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce94987-4c86-492f-aba8-32676ef8dd97-kube-api-access-qhmdv" (OuterVolumeSpecName: "kube-api-access-qhmdv") pod "4ce94987-4c86-492f-aba8-32676ef8dd97" (UID: "4ce94987-4c86-492f-aba8-32676ef8dd97"). InnerVolumeSpecName "kube-api-access-qhmdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.822135 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ce94987-4c86-492f-aba8-32676ef8dd97-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.822170 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4ce94987-4c86-492f-aba8-32676ef8dd97-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:04 crc kubenswrapper[4860]: I0123 08:30:04.822179 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhmdv\" (UniqueName: \"kubernetes.io/projected/4ce94987-4c86-492f-aba8-32676ef8dd97-kube-api-access-qhmdv\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.327335 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cd0b89b2-b5be-4c28-a263-80b1c613c1ce" (UID: "cd0b89b2-b5be-4c28-a263-80b1c613c1ce"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.327974 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0b89b2-b5be-4c28-a263-80b1c613c1ce-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.398726 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.404244 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.419869 4860 generic.go:334] "Generic (PLEG): container finished" podID="f587f90f-1305-43b3-8caf-1965979ce4da" containerID="1ee6aafd9373e6b5cf4d34c85bc144be90cc31ae788ccb2ac167ec38f6f96d09" exitCode=0 Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.419941 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerDied","Data":"1ee6aafd9373e6b5cf4d34c85bc144be90cc31ae788ccb2ac167ec38f6f96d09"} Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.421635 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" event={"ID":"4ce94987-4c86-492f-aba8-32676ef8dd97","Type":"ContainerDied","Data":"098eef84c89e23401f2872144dddd0a0131635ffe8d45b9fe7ff5fd76e47842a"} Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.421657 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="098eef84c89e23401f2872144dddd0a0131635ffe8d45b9fe7ff5fd76e47842a" Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.421699 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-w9tps" Jan 23 08:30:05 crc kubenswrapper[4860]: I0123 08:30:05.666929 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" path="/var/lib/kubelet/pods/cd0b89b2-b5be-4c28-a263-80b1c613c1ce/volumes" Jan 23 08:30:06 crc kubenswrapper[4860]: I0123 08:30:06.430520 4860 generic.go:334] "Generic (PLEG): container finished" podID="f587f90f-1305-43b3-8caf-1965979ce4da" containerID="aa2e5df1cedc7f6e69d5c61b2131c4dfca979dc8a06bbadd8bd369d4cbfe4e15" exitCode=0 Jan 23 08:30:06 crc kubenswrapper[4860]: I0123 08:30:06.430552 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerDied","Data":"aa2e5df1cedc7f6e69d5c61b2131c4dfca979dc8a06bbadd8bd369d4cbfe4e15"} Jan 23 08:30:06 crc kubenswrapper[4860]: I0123 08:30:06.465435 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_f587f90f-1305-43b3-8caf-1965979ce4da/manage-dockerfile/0.log" Jan 23 08:30:08 crc kubenswrapper[4860]: I0123 08:30:08.467572 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerStarted","Data":"60bf75894185f5dd9e02f9e97c331773628fb0e2c6f7f593c34d97cc1a4589bd"} Jan 23 08:30:08 crc kubenswrapper[4860]: I0123 08:30:08.516323 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=8.516296868 podStartE2EDuration="8.516296868s" podCreationTimestamp="2026-01-23 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:30:08.510001672 +0000 UTC m=+915.138051917" watchObservedRunningTime="2026-01-23 08:30:08.516296868 +0000 UTC m=+915.144347093" Jan 23 08:31:24 crc kubenswrapper[4860]: I0123 08:31:24.950461 4860 generic.go:334] "Generic (PLEG): container finished" podID="f587f90f-1305-43b3-8caf-1965979ce4da" containerID="60bf75894185f5dd9e02f9e97c331773628fb0e2c6f7f593c34d97cc1a4589bd" exitCode=0 Jan 23 08:31:24 crc kubenswrapper[4860]: I0123 08:31:24.950522 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerDied","Data":"60bf75894185f5dd9e02f9e97c331773628fb0e2c6f7f593c34d97cc1a4589bd"} Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.324116 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.460830 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-buildworkdir\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.460902 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-build-blob-cache\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.460947 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtbr\" (UniqueName: \"kubernetes.io/projected/f587f90f-1305-43b3-8caf-1965979ce4da-kube-api-access-nhtbr\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.460977 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-proxy-ca-bundles\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461044 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-buildcachedir\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-ca-bundles\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461094 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-system-configs\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461117 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-root\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461135 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-node-pullsecrets\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461163 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: 
\"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-push\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461194 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-pull\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461222 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-run\") pod \"f587f90f-1305-43b3-8caf-1965979ce4da\" (UID: \"f587f90f-1305-43b3-8caf-1965979ce4da\") " Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461279 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.461553 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.462036 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.462057 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.462068 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.462076 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.462340 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.465888 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.466693 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.467173 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.472855 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f587f90f-1305-43b3-8caf-1965979ce4da-kube-api-access-nhtbr" (OuterVolumeSpecName: "kube-api-access-nhtbr") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "kube-api-access-nhtbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563198 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563235 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhtbr\" (UniqueName: \"kubernetes.io/projected/f587f90f-1305-43b3-8caf-1965979ce4da-kube-api-access-nhtbr\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563246 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563255 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f587f90f-1305-43b3-8caf-1965979ce4da-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563264 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563272 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f587f90f-1305-43b3-8caf-1965979ce4da-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563280 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563290 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/f587f90f-1305-43b3-8caf-1965979ce4da-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.563300 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.656926 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.664122 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.775435 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:31:26 crc kubenswrapper[4860]: I0123 08:31:26.775719 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:31:27 crc kubenswrapper[4860]: I0123 08:31:27.068277 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f587f90f-1305-43b3-8caf-1965979ce4da","Type":"ContainerDied","Data":"ea68bb11d6f78ac5a14a7fed265b3ef8ab67891c5590b6315a89717a5c9a16e4"} Jan 23 08:31:27 crc kubenswrapper[4860]: I0123 08:31:27.068821 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea68bb11d6f78ac5a14a7fed265b3ef8ab67891c5590b6315a89717a5c9a16e4" Jan 23 08:31:27 crc kubenswrapper[4860]: I0123 08:31:27.068766 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 08:31:28 crc kubenswrapper[4860]: I0123 08:31:28.288430 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f587f90f-1305-43b3-8caf-1965979ce4da" (UID: "f587f90f-1305-43b3-8caf-1965979ce4da"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:28 crc kubenswrapper[4860]: I0123 08:31:28.384722 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f587f90f-1305-43b3-8caf-1965979ce4da-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.992707 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 08:31:31 crc kubenswrapper[4860]: E0123 08:31:31.993364 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="manage-dockerfile" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993393 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="manage-dockerfile" Jan 23 08:31:31 crc kubenswrapper[4860]: E0123 08:31:31.993409 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="git-clone" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993416 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="git-clone" Jan 23 08:31:31 crc kubenswrapper[4860]: E0123 08:31:31.993425 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce94987-4c86-492f-aba8-32676ef8dd97" containerName="collect-profiles" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993431 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce94987-4c86-492f-aba8-32676ef8dd97" containerName="collect-profiles" Jan 23 08:31:31 crc kubenswrapper[4860]: E0123 08:31:31.993444 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="docker-build" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993451 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="docker-build" Jan 23 08:31:31 crc kubenswrapper[4860]: E0123 08:31:31.993463 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerName="docker-build" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993469 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerName="docker-build" Jan 23 08:31:31 crc kubenswrapper[4860]: E0123 08:31:31.993481 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerName="manage-dockerfile" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993488 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerName="manage-dockerfile" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993627 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0b89b2-b5be-4c28-a263-80b1c613c1ce" containerName="docker-build" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993649 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="f587f90f-1305-43b3-8caf-1965979ce4da" containerName="docker-build" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.993663 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce94987-4c86-492f-aba8-32676ef8dd97" containerName="collect-profiles" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.994424 4860 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.996954 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.997321 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2dlqm" Jan 23 08:31:31 crc kubenswrapper[4860]: I0123 08:31:31.997550 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.001505 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.008830 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028551 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildcachedir\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028611 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-system-configs\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028632 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028649 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-pull\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028671 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildworkdir\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028685 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-push\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028706 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028724 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-run\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028754 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr4q4\" (UniqueName: \"kubernetes.io/projected/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-kube-api-access-zr4q4\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028778 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-root\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.028810 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.129694 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildcachedir\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.129996 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-system-configs\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.129796 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildcachedir\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130035 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130114 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130123 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-pull\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130185 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildworkdir\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130212 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-push\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130242 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130270 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-run\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130313 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr4q4\" (UniqueName: \"kubernetes.io/projected/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-kube-api-access-zr4q4\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130370 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-root\") pod \"sg-core-1-build\" 
(UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130395 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130811 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-system-configs\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.130821 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-root\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.131129 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.131189 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildworkdir\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.131242 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.131686 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.132043 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-run\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.147938 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-push\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.147938 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-pull\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.154452 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr4q4\" (UniqueName: \"kubernetes.io/projected/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-kube-api-access-zr4q4\") pod \"sg-core-1-build\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.307998 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 08:31:32 crc kubenswrapper[4860]: I0123 08:31:32.703300 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 08:31:33 crc kubenswrapper[4860]: I0123 08:31:33.113875 4860 generic.go:334] "Generic (PLEG): container finished" podID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerID="479ce501e51aad740c4880080e65afad2986fefb847f45707b9d18dbecce2f5f" exitCode=0 Jan 23 08:31:33 crc kubenswrapper[4860]: I0123 08:31:33.113925 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"93e34df8-e3a5-4a5c-87e6-ad08ea17c084","Type":"ContainerDied","Data":"479ce501e51aad740c4880080e65afad2986fefb847f45707b9d18dbecce2f5f"} Jan 23 08:31:33 crc kubenswrapper[4860]: I0123 08:31:33.113955 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"93e34df8-e3a5-4a5c-87e6-ad08ea17c084","Type":"ContainerStarted","Data":"4de739b182f97e5fee2e2a66ecfb89c24460e3e360b3b9780e8eaa3403a78b05"} Jan 23 08:31:34 crc kubenswrapper[4860]: I0123 08:31:34.123122 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"93e34df8-e3a5-4a5c-87e6-ad08ea17c084","Type":"ContainerStarted","Data":"458b341377c3fdb7e071ac7c4ee906ee89ec384f93124826db51d6c82c860ec0"} Jan 23 08:31:35 crc kubenswrapper[4860]: I0123 08:31:35.161412 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=4.161384342 podStartE2EDuration="4.161384342s" podCreationTimestamp="2026-01-23 08:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:31:35.153105881 +0000 UTC m=+1001.781156106" watchObservedRunningTime="2026-01-23 08:31:35.161384342 +0000 UTC m=+1001.789434557" Jan 23 08:31:42 crc kubenswrapper[4860]: I0123 08:31:42.396960 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 08:31:42 crc kubenswrapper[4860]: I0123 08:31:42.397828 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerName="docker-build" containerID="cri-o://458b341377c3fdb7e071ac7c4ee906ee89ec384f93124826db51d6c82c860ec0" gracePeriod=30 Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.200081 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_93e34df8-e3a5-4a5c-87e6-ad08ea17c084/docker-build/0.log" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.200944 4860 
generic.go:334] "Generic (PLEG): container finished" podID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerID="458b341377c3fdb7e071ac7c4ee906ee89ec384f93124826db51d6c82c860ec0" exitCode=1 Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.201057 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"93e34df8-e3a5-4a5c-87e6-ad08ea17c084","Type":"ContainerDied","Data":"458b341377c3fdb7e071ac7c4ee906ee89ec384f93124826db51d6c82c860ec0"} Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.470964 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.472371 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.474323 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.474940 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.475065 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.489431 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592035 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxl5f\" (UniqueName: \"kubernetes.io/projected/2d9e84b4-2516-4ec0-999b-523183e8b2f7-kube-api-access-zxl5f\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592086 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-root\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592123 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592160 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-push\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592176 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildcachedir\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc 
kubenswrapper[4860]: I0123 08:31:44.592260 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592340 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-pull\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592383 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592417 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-run\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592492 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-system-configs\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592553 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.592619 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildworkdir\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694125 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-system-configs\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694171 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 
crc kubenswrapper[4860]: I0123 08:31:44.694204 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildworkdir\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694238 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxl5f\" (UniqueName: \"kubernetes.io/projected/2d9e84b4-2516-4ec0-999b-523183e8b2f7-kube-api-access-zxl5f\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694256 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-root\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694280 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694403 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-push\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694468 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildcachedir\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694511 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694530 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-pull\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694546 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-run\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.694912 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-run\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.695383 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-system-configs\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.696079 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.696299 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildworkdir\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.697072 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildcachedir\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.697116 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-root\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.697470 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.697666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.698339 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " 
pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.703331 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-push\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.707356 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-pull\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.713748 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxl5f\" (UniqueName: \"kubernetes.io/projected/2d9e84b4-2516-4ec0-999b-523183e8b2f7-kube-api-access-zxl5f\") pod \"sg-core-2-build\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.747986 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_93e34df8-e3a5-4a5c-87e6-ad08ea17c084/docker-build/0.log" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.748548 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.799921 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898036 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-root\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898468 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-blob-cache\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898498 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr4q4\" (UniqueName: \"kubernetes.io/projected/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-kube-api-access-zr4q4\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898522 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildworkdir\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898602 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-node-pullsecrets\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: 
\"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898672 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-system-configs\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898714 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-pull\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898743 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildcachedir\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898766 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-proxy-ca-bundles\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898791 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-ca-bundles\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898824 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-push\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.898856 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-run\") pod \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\" (UID: \"93e34df8-e3a5-4a5c-87e6-ad08ea17c084\") " Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.900496 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.902547 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.902727 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.902819 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.902864 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.902918 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.907273 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.907342 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.911858 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-kube-api-access-zr4q4" (OuterVolumeSpecName: "kube-api-access-zr4q4") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "kube-api-access-zr4q4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.919196 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.993843 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.999912 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:44 crc kubenswrapper[4860]: I0123 08:31:44.999945 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:44.999957 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:44.999970 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:44.999981 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:44.999991 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.000000 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.000011 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.000048 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr4q4\" (UniqueName: \"kubernetes.io/projected/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-kube-api-access-zr4q4\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.000061 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.015438 4860 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.031875 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "93e34df8-e3a5-4a5c-87e6-ad08ea17c084" (UID: "93e34df8-e3a5-4a5c-87e6-ad08ea17c084"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.101560 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.101610 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/93e34df8-e3a5-4a5c-87e6-ad08ea17c084-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.212302 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_93e34df8-e3a5-4a5c-87e6-ad08ea17c084/docker-build/0.log" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.212786 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"93e34df8-e3a5-4a5c-87e6-ad08ea17c084","Type":"ContainerDied","Data":"4de739b182f97e5fee2e2a66ecfb89c24460e3e360b3b9780e8eaa3403a78b05"} Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.212821 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.212829 4860 scope.go:117] "RemoveContainer" containerID="458b341377c3fdb7e071ac7c4ee906ee89ec384f93124826db51d6c82c860ec0" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.214493 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerStarted","Data":"7a3ae223cc898348ac9944af87fdfc43b4ded07e52069b9320b352a94c86dc65"} Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.247892 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.255749 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.269741 4860 scope.go:117] "RemoveContainer" containerID="479ce501e51aad740c4880080e65afad2986fefb847f45707b9d18dbecce2f5f" Jan 23 08:31:45 crc kubenswrapper[4860]: I0123 08:31:45.665711 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" path="/var/lib/kubelet/pods/93e34df8-e3a5-4a5c-87e6-ad08ea17c084/volumes" Jan 23 08:31:46 crc kubenswrapper[4860]: I0123 08:31:46.222711 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerStarted","Data":"9c36dd706ba5c9197d3f4c4dc131e6b324a262c0c04753cec6caa09b63630d37"} Jan 23 08:31:47 crc kubenswrapper[4860]: I0123 08:31:47.231444 4860 generic.go:334] "Generic (PLEG): container finished" podID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerID="9c36dd706ba5c9197d3f4c4dc131e6b324a262c0c04753cec6caa09b63630d37" exitCode=0 Jan 23 08:31:47 crc kubenswrapper[4860]: I0123 08:31:47.231518 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerDied","Data":"9c36dd706ba5c9197d3f4c4dc131e6b324a262c0c04753cec6caa09b63630d37"} Jan 23 08:31:48 crc kubenswrapper[4860]: I0123 08:31:48.239820 4860 generic.go:334] "Generic (PLEG): container finished" podID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerID="416a86db9d7985288dbb98fd4bf526cb814dc50ed6889db01527d2337bb4e9bc" exitCode=0 Jan 23 08:31:48 crc kubenswrapper[4860]: I0123 08:31:48.239904 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerDied","Data":"416a86db9d7985288dbb98fd4bf526cb814dc50ed6889db01527d2337bb4e9bc"} Jan 23 08:31:48 crc kubenswrapper[4860]: I0123 08:31:48.269326 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_2d9e84b4-2516-4ec0-999b-523183e8b2f7/manage-dockerfile/0.log" Jan 23 08:31:49 crc kubenswrapper[4860]: I0123 08:31:49.248350 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerStarted","Data":"21a74d9d4bb5ec9324dc9596b22630ec5735b4e62d673898f5a227d3fe3042e2"} Jan 23 08:31:49 crc kubenswrapper[4860]: I0123 08:31:49.274360 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.274340453 podStartE2EDuration="5.274340453s" podCreationTimestamp="2026-01-23 08:31:44 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:31:49.270471439 +0000 UTC m=+1015.898521654" watchObservedRunningTime="2026-01-23 08:31:49.274340453 +0000 UTC m=+1015.902390648" Jan 23 08:31:56 crc kubenswrapper[4860]: I0123 08:31:56.775644 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:31:56 crc kubenswrapper[4860]: I0123 08:31:56.776381 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:32:26 crc kubenswrapper[4860]: I0123 08:32:26.775389 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:32:26 crc kubenswrapper[4860]: I0123 08:32:26.775869 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:32:26 crc kubenswrapper[4860]: I0123 08:32:26.775927 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:32:26 crc kubenswrapper[4860]: I0123 08:32:26.776780 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55ddb61fc36e2367c62e594d1b3c5e8526e9c5309c997e078accc938d2704751"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:32:26 crc kubenswrapper[4860]: I0123 08:32:26.776871 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://55ddb61fc36e2367c62e594d1b3c5e8526e9c5309c997e078accc938d2704751" gracePeriod=600 Jan 23 08:32:28 crc kubenswrapper[4860]: I0123 08:32:28.485469 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="55ddb61fc36e2367c62e594d1b3c5e8526e9c5309c997e078accc938d2704751" exitCode=0 Jan 23 08:32:28 crc kubenswrapper[4860]: I0123 08:32:28.485521 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"55ddb61fc36e2367c62e594d1b3c5e8526e9c5309c997e078accc938d2704751"} Jan 23 08:32:28 crc kubenswrapper[4860]: I0123 08:32:28.485834 4860 scope.go:117] "RemoveContainer" 
containerID="52a615d6aca47e73053df92f20f2d23afd3eb30795f9436f06952408ac8ca4f6" Jan 23 08:32:29 crc kubenswrapper[4860]: I0123 08:32:29.494504 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"0c5e138e24069ddd965b8c6e8f41ef692b2a1185591867905ae8eb8f0de42c68"} Jan 23 08:34:56 crc kubenswrapper[4860]: I0123 08:34:56.777228 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:34:56 crc kubenswrapper[4860]: I0123 08:34:56.777599 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:35:26 crc kubenswrapper[4860]: I0123 08:35:26.775375 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:35:26 crc kubenswrapper[4860]: I0123 08:35:26.775961 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:35:54 crc kubenswrapper[4860]: I0123 08:35:54.758995 4860 generic.go:334] "Generic (PLEG): container finished" podID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerID="21a74d9d4bb5ec9324dc9596b22630ec5735b4e62d673898f5a227d3fe3042e2" exitCode=0 Jan 23 08:35:54 crc kubenswrapper[4860]: I0123 08:35:54.759117 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerDied","Data":"21a74d9d4bb5ec9324dc9596b22630ec5735b4e62d673898f5a227d3fe3042e2"} Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.009228 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.147185 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-blob-cache\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.147267 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-push\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.147487 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.147288 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-node-pullsecrets\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148439 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-pull\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148493 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-system-configs\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148516 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-proxy-ca-bundles\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148566 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-root\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148610 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-ca-bundles\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148644 4860 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-run\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148713 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxl5f\" (UniqueName: \"kubernetes.io/projected/2d9e84b4-2516-4ec0-999b-523183e8b2f7-kube-api-access-zxl5f\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148743 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildcachedir\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.148766 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildworkdir\") pod \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\" (UID: \"2d9e84b4-2516-4ec0-999b-523183e8b2f7\") " Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.149065 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.149289 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.149315 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.149376 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.151373 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.152380 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.152944 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.153704 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.154874 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.156000 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9e84b4-2516-4ec0-999b-523183e8b2f7-kube-api-access-zxl5f" (OuterVolumeSpecName: "kube-api-access-zxl5f") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "kube-api-access-zxl5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.166463 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.256816 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxl5f\" (UniqueName: \"kubernetes.io/projected/2d9e84b4-2516-4ec0-999b-523183e8b2f7-kube-api-access-zxl5f\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257271 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257290 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257306 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2d9e84b4-2516-4ec0-999b-523183e8b2f7-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257318 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257330 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257343 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.257355 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270276 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wdc2f"] Jan 23 08:35:56 crc kubenswrapper[4860]: E0123 08:35:56.270513 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="manage-dockerfile" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270525 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="manage-dockerfile" Jan 23 08:35:56 crc kubenswrapper[4860]: E0123 08:35:56.270539 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerName="docker-build" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270546 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerName="docker-build" Jan 23 08:35:56 crc kubenswrapper[4860]: E0123 08:35:56.270557 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="git-clone" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270563 4860 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="git-clone" Jan 23 08:35:56 crc kubenswrapper[4860]: E0123 08:35:56.270578 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="docker-build" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270584 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="docker-build" Jan 23 08:35:56 crc kubenswrapper[4860]: E0123 08:35:56.270590 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerName="manage-dockerfile" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270596 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerName="manage-dockerfile" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270755 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e34df8-e3a5-4a5c-87e6-ad08ea17c084" containerName="docker-build" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.270768 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9e84b4-2516-4ec0-999b-523183e8b2f7" containerName="docker-build" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.271546 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.284073 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdc2f"] Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.360721 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2vp\" (UniqueName: \"kubernetes.io/projected/8bf6197e-42ed-4f0b-8a07-93c32813ca90-kube-api-access-kp2vp\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.360837 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-catalog-content\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.360942 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-utilities\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.462283 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-catalog-content\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.462344 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-utilities\") pod 
\"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.462414 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp2vp\" (UniqueName: \"kubernetes.io/projected/8bf6197e-42ed-4f0b-8a07-93c32813ca90-kube-api-access-kp2vp\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.463367 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-catalog-content\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.463669 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-utilities\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.482455 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp2vp\" (UniqueName: \"kubernetes.io/projected/8bf6197e-42ed-4f0b-8a07-93c32813ca90-kube-api-access-kp2vp\") pod \"certified-operators-wdc2f\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.565068 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.602512 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.667713 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.775466 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.775514 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.775555 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.776168 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c5e138e24069ddd965b8c6e8f41ef692b2a1185591867905ae8eb8f0de42c68"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.776229 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://0c5e138e24069ddd965b8c6e8f41ef692b2a1185591867905ae8eb8f0de42c68" gracePeriod=600 Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.793000 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"2d9e84b4-2516-4ec0-999b-523183e8b2f7","Type":"ContainerDied","Data":"7a3ae223cc898348ac9944af87fdfc43b4ded07e52069b9320b352a94c86dc65"} Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.793053 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a3ae223cc898348ac9944af87fdfc43b4ded07e52069b9320b352a94c86dc65" Jan 23 08:35:56 crc kubenswrapper[4860]: I0123 08:35:56.793139 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 08:35:57 crc kubenswrapper[4860]: I0123 08:35:57.066889 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wdc2f"] Jan 23 08:35:57 crc kubenswrapper[4860]: I0123 08:35:57.799307 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdc2f" event={"ID":"8bf6197e-42ed-4f0b-8a07-93c32813ca90","Type":"ContainerStarted","Data":"542a1689e9ef8d9ac299a3077fc401730ed2f8e522f676570600ad736f63a042"} Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.459408 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fnsmx"] Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.460988 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.492900 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fnsmx"] Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.592134 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-utilities\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.592211 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-catalog-content\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.592238 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlm5n\" (UniqueName: \"kubernetes.io/projected/424fb122-47dd-4e58-9711-ee6be12a7040-kube-api-access-qlm5n\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.693946 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-utilities\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.694257 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-catalog-content\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.694349 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlm5n\" (UniqueName: \"kubernetes.io/projected/424fb122-47dd-4e58-9711-ee6be12a7040-kube-api-access-qlm5n\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.695093 
4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-utilities\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.696681 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-catalog-content\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.727774 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlm5n\" (UniqueName: \"kubernetes.io/projected/424fb122-47dd-4e58-9711-ee6be12a7040-kube-api-access-qlm5n\") pod \"redhat-operators-fnsmx\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.736662 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2d9e84b4-2516-4ec0-999b-523183e8b2f7" (UID: "2d9e84b4-2516-4ec0-999b-523183e8b2f7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.774604 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:35:58 crc kubenswrapper[4860]: I0123 08:35:58.795955 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2d9e84b4-2516-4ec0-999b-523183e8b2f7-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:35:59 crc kubenswrapper[4860]: I0123 08:35:59.025119 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fnsmx"] Jan 23 08:35:59 crc kubenswrapper[4860]: I0123 08:35:59.810817 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerStarted","Data":"fd294c213214f62cbbf18c7601b04fdf7560e7d63f7ce83e79fbcc78a736069b"} Jan 23 08:35:59 crc kubenswrapper[4860]: I0123 08:35:59.814218 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="0c5e138e24069ddd965b8c6e8f41ef692b2a1185591867905ae8eb8f0de42c68" exitCode=0 Jan 23 08:35:59 crc kubenswrapper[4860]: I0123 08:35:59.814253 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"0c5e138e24069ddd965b8c6e8f41ef692b2a1185591867905ae8eb8f0de42c68"} Jan 23 08:35:59 crc kubenswrapper[4860]: I0123 08:35:59.814280 4860 scope.go:117] "RemoveContainer" containerID="55ddb61fc36e2367c62e594d1b3c5e8526e9c5309c997e078accc938d2704751" Jan 23 08:36:00 crc kubenswrapper[4860]: I0123 08:36:00.821477 4860 generic.go:334] "Generic (PLEG): container finished" podID="424fb122-47dd-4e58-9711-ee6be12a7040" containerID="866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585" exitCode=0 Jan 23 
08:36:00 crc kubenswrapper[4860]: I0123 08:36:00.821716 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerDied","Data":"866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585"} Jan 23 08:36:00 crc kubenswrapper[4860]: I0123 08:36:00.823386 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:36:00 crc kubenswrapper[4860]: I0123 08:36:00.825545 4860 generic.go:334] "Generic (PLEG): container finished" podID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerID="c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6" exitCode=0 Jan 23 08:36:00 crc kubenswrapper[4860]: I0123 08:36:00.825657 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdc2f" event={"ID":"8bf6197e-42ed-4f0b-8a07-93c32813ca90","Type":"ContainerDied","Data":"c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6"} Jan 23 08:36:00 crc kubenswrapper[4860]: I0123 08:36:00.829838 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"cfb686e44ef0e61ba024387441d8514e0c284409a4a0bfcf8b79aaad27b5ee16"} Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.777956 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.779653 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.781196 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.781453 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.781500 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2dlqm" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.784325 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.795332 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.837596 4860 generic.go:334] "Generic (PLEG): container finished" podID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerID="eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a" exitCode=0 Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.837669 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdc2f" event={"ID":"8bf6197e-42ed-4f0b-8a07-93c32813ca90","Type":"ContainerDied","Data":"eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a"} Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.841566 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerStarted","Data":"390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312"} Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933179 
4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-pull\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933259 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-push\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933350 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933442 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933504 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933523 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rpc\" (UniqueName: \"kubernetes.io/projected/de1f4a8f-f81e-41d5-9859-9504a74eac0d-kube-api-access-26rpc\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933547 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933602 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933656 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933698 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933779 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:01 crc kubenswrapper[4860]: I0123 08:36:01.933821 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035387 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035470 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035503 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-pull\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035545 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-push\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035573 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035595 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035663 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26rpc\" (UniqueName: \"kubernetes.io/projected/de1f4a8f-f81e-41d5-9859-9504a74eac0d-kube-api-access-26rpc\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035686 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035715 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035751 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035754 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035784 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035906 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.035989 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.036296 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.036595 4860 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.036938 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.037388 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.038325 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.038387 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.038511 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.042238 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-push\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.047527 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-pull\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.057684 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26rpc\" (UniqueName: \"kubernetes.io/projected/de1f4a8f-f81e-41d5-9859-9504a74eac0d-kube-api-access-26rpc\") pod \"sg-bridge-1-build\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.098311 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.497855 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 08:36:02 crc kubenswrapper[4860]: W0123 08:36:02.503038 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde1f4a8f_f81e_41d5_9859_9504a74eac0d.slice/crio-9d86be0f31612966050a4b902581caa2805537b69989cd762962ff8d2382f7bd WatchSource:0}: Error finding container 9d86be0f31612966050a4b902581caa2805537b69989cd762962ff8d2382f7bd: Status 404 returned error can't find the container with id 9d86be0f31612966050a4b902581caa2805537b69989cd762962ff8d2382f7bd Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.849839 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdc2f" event={"ID":"8bf6197e-42ed-4f0b-8a07-93c32813ca90","Type":"ContainerStarted","Data":"ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2"} Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.851004 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"de1f4a8f-f81e-41d5-9859-9504a74eac0d","Type":"ContainerStarted","Data":"db07121b8e2abdbfcfd35ccced31778e53ed5fcad4ff362b2abfef65cbaf597a"} Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.851054 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"de1f4a8f-f81e-41d5-9859-9504a74eac0d","Type":"ContainerStarted","Data":"9d86be0f31612966050a4b902581caa2805537b69989cd762962ff8d2382f7bd"} Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.852708 4860 generic.go:334] "Generic (PLEG): container finished" podID="424fb122-47dd-4e58-9711-ee6be12a7040" containerID="390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312" exitCode=0 Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.852734 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerDied","Data":"390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312"} Jan 23 08:36:02 crc kubenswrapper[4860]: I0123 08:36:02.940426 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wdc2f" podStartSLOduration=5.478298527 podStartE2EDuration="6.940405042s" podCreationTimestamp="2026-01-23 08:35:56 +0000 UTC" firstStartedPulling="2026-01-23 08:36:00.828129462 +0000 UTC m=+1267.456179657" lastFinishedPulling="2026-01-23 08:36:02.290235987 +0000 UTC m=+1268.918286172" observedRunningTime="2026-01-23 08:36:02.902757456 +0000 UTC m=+1269.530807641" watchObservedRunningTime="2026-01-23 08:36:02.940405042 +0000 UTC m=+1269.568455227" Jan 23 08:36:03 crc kubenswrapper[4860]: I0123 08:36:03.862216 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerStarted","Data":"105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d"} Jan 23 08:36:03 crc kubenswrapper[4860]: I0123 08:36:03.864273 4860 generic.go:334] "Generic (PLEG): container finished" podID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerID="db07121b8e2abdbfcfd35ccced31778e53ed5fcad4ff362b2abfef65cbaf597a" exitCode=0 Jan 23 08:36:03 crc kubenswrapper[4860]: I0123 
08:36:03.864425 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"de1f4a8f-f81e-41d5-9859-9504a74eac0d","Type":"ContainerDied","Data":"db07121b8e2abdbfcfd35ccced31778e53ed5fcad4ff362b2abfef65cbaf597a"} Jan 23 08:36:03 crc kubenswrapper[4860]: I0123 08:36:03.878491 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fnsmx" podStartSLOduration=3.04796257 podStartE2EDuration="5.87847345s" podCreationTimestamp="2026-01-23 08:35:58 +0000 UTC" firstStartedPulling="2026-01-23 08:36:00.823188772 +0000 UTC m=+1267.451238957" lastFinishedPulling="2026-01-23 08:36:03.653699662 +0000 UTC m=+1270.281749837" observedRunningTime="2026-01-23 08:36:03.877725992 +0000 UTC m=+1270.505776177" watchObservedRunningTime="2026-01-23 08:36:03.87847345 +0000 UTC m=+1270.506523655" Jan 23 08:36:04 crc kubenswrapper[4860]: I0123 08:36:04.872740 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"de1f4a8f-f81e-41d5-9859-9504a74eac0d","Type":"ContainerStarted","Data":"f1f4c32a6e3590a3b677f793cc330b3cd78f31c1acc6df5ac41f66db1ccae968"} Jan 23 08:36:04 crc kubenswrapper[4860]: I0123 08:36:04.893572 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.893553691 podStartE2EDuration="3.893553691s" podCreationTimestamp="2026-01-23 08:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:36:04.891364008 +0000 UTC m=+1271.519414193" watchObservedRunningTime="2026-01-23 08:36:04.893553691 +0000 UTC m=+1271.521603876" Jan 23 08:36:06 crc kubenswrapper[4860]: I0123 08:36:06.603481 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:36:06 crc kubenswrapper[4860]: I0123 08:36:06.603561 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:36:06 crc kubenswrapper[4860]: I0123 08:36:06.656376 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:36:08 crc kubenswrapper[4860]: I0123 08:36:08.776175 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:36:08 crc kubenswrapper[4860]: I0123 08:36:08.776501 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:36:09 crc kubenswrapper[4860]: I0123 08:36:09.815543 4860 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fnsmx" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="registry-server" probeResult="failure" output=< Jan 23 08:36:09 crc kubenswrapper[4860]: timeout: failed to connect service ":50051" within 1s Jan 23 08:36:09 crc kubenswrapper[4860]: > Jan 23 08:36:12 crc kubenswrapper[4860]: I0123 08:36:12.062373 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 08:36:12 crc kubenswrapper[4860]: I0123 08:36:12.062842 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerName="docker-build" 
containerID="cri-o://f1f4c32a6e3590a3b677f793cc330b3cd78f31c1acc6df5ac41f66db1ccae968" gracePeriod=30 Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.155928 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.157428 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.159618 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.160085 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.160533 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.185627 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304590 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304652 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304685 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pltxg\" (UniqueName: \"kubernetes.io/projected/74245f59-3cb4-4dd3-a6a3-30e7f449f241-kube-api-access-pltxg\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304709 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-push\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304731 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304759 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-pull\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304864 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304923 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304957 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.304985 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.305307 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.305523 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408010 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408142 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408209 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408261 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408332 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pltxg\" (UniqueName: \"kubernetes.io/projected/74245f59-3cb4-4dd3-a6a3-30e7f449f241-kube-api-access-pltxg\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408381 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-push\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408438 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408503 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-pull\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408544 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408604 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408640 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408691 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc 
kubenswrapper[4860]: I0123 08:36:14.408739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.408785 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.409905 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.410927 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.409962 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.410610 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.410812 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.410633 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.411431 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.415530 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: 
\"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-pull\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.416786 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-push\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.425258 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pltxg\" (UniqueName: \"kubernetes.io/projected/74245f59-3cb4-4dd3-a6a3-30e7f449f241-kube-api-access-pltxg\") pod \"sg-bridge-2-build\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.478249 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.917548 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 23 08:36:14 crc kubenswrapper[4860]: W0123 08:36:14.925527 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74245f59_3cb4_4dd3_a6a3_30e7f449f241.slice/crio-f59dc37bee6d66358bebdc14cc20f97cabe6c0692ead3cdc5fff42c0840b7cc8 WatchSource:0}: Error finding container f59dc37bee6d66358bebdc14cc20f97cabe6c0692ead3cdc5fff42c0840b7cc8: Status 404 returned error can't find the container with id f59dc37bee6d66358bebdc14cc20f97cabe6c0692ead3cdc5fff42c0840b7cc8 Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.939586 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_de1f4a8f-f81e-41d5-9859-9504a74eac0d/docker-build/0.log" Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.940052 4860 generic.go:334] "Generic (PLEG): container finished" podID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerID="f1f4c32a6e3590a3b677f793cc330b3cd78f31c1acc6df5ac41f66db1ccae968" exitCode=1 Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.940123 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"de1f4a8f-f81e-41d5-9859-9504a74eac0d","Type":"ContainerDied","Data":"f1f4c32a6e3590a3b677f793cc330b3cd78f31c1acc6df5ac41f66db1ccae968"} Jan 23 08:36:14 crc kubenswrapper[4860]: I0123 08:36:14.941324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerStarted","Data":"f59dc37bee6d66358bebdc14cc20f97cabe6c0692ead3cdc5fff42c0840b7cc8"} Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.157093 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_de1f4a8f-f81e-41d5-9859-9504a74eac0d/docker-build/0.log" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.157501 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.318349 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-run\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.318685 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26rpc\" (UniqueName: \"kubernetes.io/projected/de1f4a8f-f81e-41d5-9859-9504a74eac0d-kube-api-access-26rpc\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.318722 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-blob-cache\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.318744 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-system-configs\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.318976 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-pull\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.319122 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.319285 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildcachedir\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320201 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-push\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320275 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320288 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320528 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-node-pullsecrets\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320714 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-ca-bundles\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320754 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320811 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-root\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320844 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-proxy-ca-bundles\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.320865 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildworkdir\") pod \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\" (UID: \"de1f4a8f-f81e-41d5-9859-9504a74eac0d\") " Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321259 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321281 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321292 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1f4a8f-f81e-41d5-9859-9504a74eac0d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321303 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321313 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321524 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.321618 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.322124 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.322784 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.323758 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.324376 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1f4a8f-f81e-41d5-9859-9504a74eac0d-kube-api-access-26rpc" (OuterVolumeSpecName: "kube-api-access-26rpc") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "kube-api-access-26rpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.324772 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "de1f4a8f-f81e-41d5-9859-9504a74eac0d" (UID: "de1f4a8f-f81e-41d5-9859-9504a74eac0d"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422273 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422304 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422315 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1f4a8f-f81e-41d5-9859-9504a74eac0d-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422325 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26rpc\" (UniqueName: \"kubernetes.io/projected/de1f4a8f-f81e-41d5-9859-9504a74eac0d-kube-api-access-26rpc\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422334 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422343 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/de1f4a8f-f81e-41d5-9859-9504a74eac0d-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.422353 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1f4a8f-f81e-41d5-9859-9504a74eac0d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.950851 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_de1f4a8f-f81e-41d5-9859-9504a74eac0d/docker-build/0.log" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.951367 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"de1f4a8f-f81e-41d5-9859-9504a74eac0d","Type":"ContainerDied","Data":"9d86be0f31612966050a4b902581caa2805537b69989cd762962ff8d2382f7bd"} Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.951432 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.951473 4860 scope.go:117] "RemoveContainer" containerID="f1f4c32a6e3590a3b677f793cc330b3cd78f31c1acc6df5ac41f66db1ccae968" Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.954394 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerStarted","Data":"7c773209e293f7d8235a51c42d45e562d9d942c52b0a7de1585a20268259bbb8"} Jan 23 08:36:15 crc kubenswrapper[4860]: I0123 08:36:15.975783 4860 scope.go:117] "RemoveContainer" containerID="db07121b8e2abdbfcfd35ccced31778e53ed5fcad4ff362b2abfef65cbaf597a" Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:15.999618 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:16.010514 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:16.643649 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:16.680144 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdc2f"] Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:16.962681 4860 generic.go:334] "Generic (PLEG): container finished" podID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerID="7c773209e293f7d8235a51c42d45e562d9d942c52b0a7de1585a20268259bbb8" exitCode=0 Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:16.963109 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wdc2f" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="registry-server" containerID="cri-o://ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2" gracePeriod=2 Jan 23 08:36:16 crc kubenswrapper[4860]: I0123 08:36:16.962822 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerDied","Data":"7c773209e293f7d8235a51c42d45e562d9d942c52b0a7de1585a20268259bbb8"} Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.405607 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.548289 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-utilities\") pod \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.549158 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-utilities" (OuterVolumeSpecName: "utilities") pod "8bf6197e-42ed-4f0b-8a07-93c32813ca90" (UID: "8bf6197e-42ed-4f0b-8a07-93c32813ca90"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.549224 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp2vp\" (UniqueName: \"kubernetes.io/projected/8bf6197e-42ed-4f0b-8a07-93c32813ca90-kube-api-access-kp2vp\") pod \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.550187 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-catalog-content\") pod \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\" (UID: \"8bf6197e-42ed-4f0b-8a07-93c32813ca90\") " Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.550572 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.554602 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf6197e-42ed-4f0b-8a07-93c32813ca90-kube-api-access-kp2vp" (OuterVolumeSpecName: "kube-api-access-kp2vp") pod "8bf6197e-42ed-4f0b-8a07-93c32813ca90" (UID: "8bf6197e-42ed-4f0b-8a07-93c32813ca90"). InnerVolumeSpecName "kube-api-access-kp2vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.601647 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bf6197e-42ed-4f0b-8a07-93c32813ca90" (UID: "8bf6197e-42ed-4f0b-8a07-93c32813ca90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.651219 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf6197e-42ed-4f0b-8a07-93c32813ca90-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.651246 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp2vp\" (UniqueName: \"kubernetes.io/projected/8bf6197e-42ed-4f0b-8a07-93c32813ca90-kube-api-access-kp2vp\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.664802 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" path="/var/lib/kubelet/pods/de1f4a8f-f81e-41d5-9859-9504a74eac0d/volumes" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.971688 4860 generic.go:334] "Generic (PLEG): container finished" podID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerID="7a2b393cf4e5583c0b6868b1a0e492b00639a1f4528fb244c2a6363cdc27a91c" exitCode=0 Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.971782 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerDied","Data":"7a2b393cf4e5583c0b6868b1a0e492b00639a1f4528fb244c2a6363cdc27a91c"} Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.975396 4860 generic.go:334] "Generic (PLEG): container finished" podID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerID="ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2" exitCode=0 Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.975454 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdc2f" event={"ID":"8bf6197e-42ed-4f0b-8a07-93c32813ca90","Type":"ContainerDied","Data":"ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2"} Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.975495 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wdc2f" event={"ID":"8bf6197e-42ed-4f0b-8a07-93c32813ca90","Type":"ContainerDied","Data":"542a1689e9ef8d9ac299a3077fc401730ed2f8e522f676570600ad736f63a042"} Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.975520 4860 scope.go:117] "RemoveContainer" containerID="ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.975523 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wdc2f" Jan 23 08:36:17 crc kubenswrapper[4860]: I0123 08:36:17.996553 4860 scope.go:117] "RemoveContainer" containerID="eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.008919 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_74245f59-3cb4-4dd3-a6a3-30e7f449f241/manage-dockerfile/0.log" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.020533 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wdc2f"] Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.025964 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wdc2f"] Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.048721 4860 scope.go:117] "RemoveContainer" containerID="c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.064865 4860 scope.go:117] "RemoveContainer" containerID="ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2" Jan 23 08:36:18 crc kubenswrapper[4860]: E0123 08:36:18.068411 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2\": container with ID starting with ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2 not found: ID does not exist" containerID="ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.068461 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2"} err="failed to get container status \"ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2\": rpc error: code = NotFound desc = could not find container \"ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2\": container with ID starting with ef9cdef444246189a517dba4b54d8726764fca5cac89ef7f41e8da3bfd0750c2 not found: ID does not exist" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.068490 4860 scope.go:117] "RemoveContainer" containerID="eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a" Jan 23 08:36:18 crc kubenswrapper[4860]: E0123 08:36:18.072681 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a\": container with ID starting with eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a not found: ID does not exist" containerID="eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.072726 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a"} err="failed to get container status \"eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a\": rpc error: code = NotFound desc = could not find container \"eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a\": container with ID starting with eeb30f358e585a4b6079b4856c478d9fff89091e0eec69dcc869308b0f07ca9a not found: ID does not exist" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.072750 4860 
scope.go:117] "RemoveContainer" containerID="c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6" Jan 23 08:36:18 crc kubenswrapper[4860]: E0123 08:36:18.073213 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6\": container with ID starting with c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6 not found: ID does not exist" containerID="c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.073246 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6"} err="failed to get container status \"c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6\": rpc error: code = NotFound desc = could not find container \"c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6\": container with ID starting with c0c39194bce3f9d83160bd388232e2fe2850f25198d8aea164f0f02b568823f6 not found: ID does not exist" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.809796 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.845256 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:36:18 crc kubenswrapper[4860]: I0123 08:36:18.984625 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerStarted","Data":"a5289b7a147dbfa5325ccf088b09f25c3d19ec4e7222978156b8c3cc4632198e"} Jan 23 08:36:19 crc kubenswrapper[4860]: I0123 08:36:19.015682 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.015661622 podStartE2EDuration="5.015661622s" podCreationTimestamp="2026-01-23 08:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:36:19.010887877 +0000 UTC m=+1285.638938082" watchObservedRunningTime="2026-01-23 08:36:19.015661622 +0000 UTC m=+1285.643711807" Jan 23 08:36:19 crc kubenswrapper[4860]: I0123 08:36:19.666582 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" path="/var/lib/kubelet/pods/8bf6197e-42ed-4f0b-8a07-93c32813ca90/volumes" Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.274415 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fnsmx"] Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.275001 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fnsmx" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="registry-server" containerID="cri-o://105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d" gracePeriod=2 Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.640909 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.794879 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-utilities\") pod \"424fb122-47dd-4e58-9711-ee6be12a7040\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.795008 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-catalog-content\") pod \"424fb122-47dd-4e58-9711-ee6be12a7040\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.795175 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlm5n\" (UniqueName: \"kubernetes.io/projected/424fb122-47dd-4e58-9711-ee6be12a7040-kube-api-access-qlm5n\") pod \"424fb122-47dd-4e58-9711-ee6be12a7040\" (UID: \"424fb122-47dd-4e58-9711-ee6be12a7040\") " Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.797574 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-utilities" (OuterVolumeSpecName: "utilities") pod "424fb122-47dd-4e58-9711-ee6be12a7040" (UID: "424fb122-47dd-4e58-9711-ee6be12a7040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.800557 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424fb122-47dd-4e58-9711-ee6be12a7040-kube-api-access-qlm5n" (OuterVolumeSpecName: "kube-api-access-qlm5n") pod "424fb122-47dd-4e58-9711-ee6be12a7040" (UID: "424fb122-47dd-4e58-9711-ee6be12a7040"). InnerVolumeSpecName "kube-api-access-qlm5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.898160 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.898198 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlm5n\" (UniqueName: \"kubernetes.io/projected/424fb122-47dd-4e58-9711-ee6be12a7040-kube-api-access-qlm5n\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:20 crc kubenswrapper[4860]: I0123 08:36:20.918281 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "424fb122-47dd-4e58-9711-ee6be12a7040" (UID: "424fb122-47dd-4e58-9711-ee6be12a7040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.000442 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/424fb122-47dd-4e58-9711-ee6be12a7040-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.001664 4860 generic.go:334] "Generic (PLEG): container finished" podID="424fb122-47dd-4e58-9711-ee6be12a7040" containerID="105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d" exitCode=0 Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.001714 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerDied","Data":"105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d"} Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.001728 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fnsmx" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.001764 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnsmx" event={"ID":"424fb122-47dd-4e58-9711-ee6be12a7040","Type":"ContainerDied","Data":"fd294c213214f62cbbf18c7601b04fdf7560e7d63f7ce83e79fbcc78a736069b"} Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.001785 4860 scope.go:117] "RemoveContainer" containerID="105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.023187 4860 scope.go:117] "RemoveContainer" containerID="390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.035275 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fnsmx"] Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.043133 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fnsmx"] Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.058355 4860 scope.go:117] "RemoveContainer" containerID="866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.075172 4860 scope.go:117] "RemoveContainer" containerID="105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d" Jan 23 08:36:21 crc kubenswrapper[4860]: E0123 08:36:21.075610 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d\": container with ID starting with 105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d not found: ID does not exist" containerID="105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.075656 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d"} err="failed to get container status \"105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d\": rpc error: code = NotFound desc = could not find container \"105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d\": container with ID starting with 105520af2eb87ca02d2d05b13586c928067b4312fc1b8dd23fc13d4b3a4bc50d not found: ID does not exist" Jan 23 08:36:21 crc 
kubenswrapper[4860]: I0123 08:36:21.075682 4860 scope.go:117] "RemoveContainer" containerID="390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312" Jan 23 08:36:21 crc kubenswrapper[4860]: E0123 08:36:21.076068 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312\": container with ID starting with 390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312 not found: ID does not exist" containerID="390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.076108 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312"} err="failed to get container status \"390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312\": rpc error: code = NotFound desc = could not find container \"390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312\": container with ID starting with 390977a071fd6d9dfc003662c2569d4043583f32e4cd782bb47bb00a62007312 not found: ID does not exist" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.076136 4860 scope.go:117] "RemoveContainer" containerID="866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585" Jan 23 08:36:21 crc kubenswrapper[4860]: E0123 08:36:21.076487 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585\": container with ID starting with 866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585 not found: ID does not exist" containerID="866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.076559 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585"} err="failed to get container status \"866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585\": rpc error: code = NotFound desc = could not find container \"866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585\": container with ID starting with 866e32755d3381cc932c84cb743e34b7421835bc98217049a4aed2e77b7b9585 not found: ID does not exist" Jan 23 08:36:21 crc kubenswrapper[4860]: E0123 08:36:21.124339 4860 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424fb122_47dd_4e58_9711_ee6be12a7040.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424fb122_47dd_4e58_9711_ee6be12a7040.slice/crio-fd294c213214f62cbbf18c7601b04fdf7560e7d63f7ce83e79fbcc78a736069b\": RecentStats: unable to find data in memory cache]" Jan 23 08:36:21 crc kubenswrapper[4860]: I0123 08:36:21.665070 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" path="/var/lib/kubelet/pods/424fb122-47dd-4e58-9711-ee6be12a7040/volumes" Jan 23 08:37:12 crc kubenswrapper[4860]: I0123 08:37:12.322115 4860 generic.go:334] "Generic (PLEG): container finished" podID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerID="a5289b7a147dbfa5325ccf088b09f25c3d19ec4e7222978156b8c3cc4632198e" exitCode=0 Jan 23 08:37:12 crc 
kubenswrapper[4860]: I0123 08:37:12.322190 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerDied","Data":"a5289b7a147dbfa5325ccf088b09f25c3d19ec4e7222978156b8c3cc4632198e"} Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.588074 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647028 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-pull\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647096 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildworkdir\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647129 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pltxg\" (UniqueName: \"kubernetes.io/projected/74245f59-3cb4-4dd3-a6a3-30e7f449f241-kube-api-access-pltxg\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647155 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-push\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647196 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-ca-bundles\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647221 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-proxy-ca-bundles\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647252 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-node-pullsecrets\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647281 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-system-configs\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647301 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-blob-cache\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647342 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-run\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647397 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildcachedir\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.647430 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-root\") pod \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\" (UID: \"74245f59-3cb4-4dd3-a6a3-30e7f449f241\") " Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.648058 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.648422 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.649921 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.650050 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.650449 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.650576 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.651147 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.652924 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.652951 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74245f59-3cb4-4dd3-a6a3-30e7f449f241-kube-api-access-pltxg" (OuterVolumeSpecName: "kube-api-access-pltxg") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "kube-api-access-pltxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.652963 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749341 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749678 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749688 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749699 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749708 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pltxg\" (UniqueName: \"kubernetes.io/projected/74245f59-3cb4-4dd3-a6a3-30e7f449f241-kube-api-access-pltxg\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749717 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/74245f59-3cb4-4dd3-a6a3-30e7f449f241-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749725 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749756 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749765 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74245f59-3cb4-4dd3-a6a3-30e7f449f241-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.749774 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.762613 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:13 crc kubenswrapper[4860]: I0123 08:37:13.850883 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:14 crc kubenswrapper[4860]: I0123 08:37:14.344190 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"74245f59-3cb4-4dd3-a6a3-30e7f449f241","Type":"ContainerDied","Data":"f59dc37bee6d66358bebdc14cc20f97cabe6c0692ead3cdc5fff42c0840b7cc8"} Jan 23 08:37:14 crc kubenswrapper[4860]: I0123 08:37:14.344232 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f59dc37bee6d66358bebdc14cc20f97cabe6c0692ead3cdc5fff42c0840b7cc8" Jan 23 08:37:14 crc kubenswrapper[4860]: I0123 08:37:14.344265 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 08:37:14 crc kubenswrapper[4860]: I0123 08:37:14.371101 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "74245f59-3cb4-4dd3-a6a3-30e7f449f241" (UID: "74245f59-3cb4-4dd3-a6a3-30e7f449f241"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:14 crc kubenswrapper[4860]: I0123 08:37:14.460520 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/74245f59-3cb4-4dd3-a6a3-30e7f449f241-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890209 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890661 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="extract-content" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890673 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="extract-content" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890680 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="extract-utilities" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890687 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="extract-utilities" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890695 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerName="manage-dockerfile" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890702 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerName="manage-dockerfile" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890711 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="extract-utilities" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890716 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" 
containerName="extract-utilities" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890728 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="extract-content" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890734 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="extract-content" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890741 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerName="docker-build" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890746 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerName="docker-build" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890753 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="registry-server" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890759 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="registry-server" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890768 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="registry-server" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890774 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="registry-server" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890781 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="docker-build" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890786 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="docker-build" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890799 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="manage-dockerfile" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890804 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="manage-dockerfile" Jan 23 08:37:18 crc kubenswrapper[4860]: E0123 08:37:18.890811 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="git-clone" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890816 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="git-clone" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890912 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1f4a8f-f81e-41d5-9859-9504a74eac0d" containerName="docker-build" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890922 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf6197e-42ed-4f0b-8a07-93c32813ca90" containerName="registry-server" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890929 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="424fb122-47dd-4e58-9711-ee6be12a7040" containerName="registry-server" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.890940 4860 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="74245f59-3cb4-4dd3-a6a3-30e7f449f241" containerName="docker-build" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.891487 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.893492 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2dlqm" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.893975 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.894210 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.894624 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 23 08:37:18 crc kubenswrapper[4860]: I0123 08:37:18.906723 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.015749 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016110 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016212 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016302 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016395 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016483 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016583 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016660 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016740 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016852 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.016970 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.017138 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rrg\" (UniqueName: \"kubernetes.io/projected/2165dc89-623c-4dea-a70f-72e58df50ef6-kube-api-access-d2rrg\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118166 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118443 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-root\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118551 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118630 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118733 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118853 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119004 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119141 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119377 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119457 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119549 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119321 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119271 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.118867 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119529 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119552 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119715 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119060 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc 
kubenswrapper[4860]: I0123 08:37:19.119739 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.119904 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rrg\" (UniqueName: \"kubernetes.io/projected/2165dc89-623c-4dea-a70f-72e58df50ef6-kube-api-access-d2rrg\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.134709 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.134743 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.137302 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rrg\" (UniqueName: \"kubernetes.io/projected/2165dc89-623c-4dea-a70f-72e58df50ef6-kube-api-access-d2rrg\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.208232 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:19 crc kubenswrapper[4860]: I0123 08:37:19.386416 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 08:37:20 crc kubenswrapper[4860]: I0123 08:37:20.378740 4860 generic.go:334] "Generic (PLEG): container finished" podID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerID="d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93" exitCode=0 Jan 23 08:37:20 crc kubenswrapper[4860]: I0123 08:37:20.378806 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"2165dc89-623c-4dea-a70f-72e58df50ef6","Type":"ContainerDied","Data":"d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93"} Jan 23 08:37:20 crc kubenswrapper[4860]: I0123 08:37:20.379078 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"2165dc89-623c-4dea-a70f-72e58df50ef6","Type":"ContainerStarted","Data":"53e07700cd31f28dc68c6271c8be07757d916f8ca18d421ad3c50c31bf3c5156"} Jan 23 08:37:21 crc kubenswrapper[4860]: I0123 08:37:21.385521 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"2165dc89-623c-4dea-a70f-72e58df50ef6","Type":"ContainerStarted","Data":"6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862"} Jan 23 08:37:21 crc kubenswrapper[4860]: I0123 08:37:21.412508 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.41248823 podStartE2EDuration="3.41248823s" podCreationTimestamp="2026-01-23 08:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:37:21.407942729 +0000 UTC m=+1348.035992924" watchObservedRunningTime="2026-01-23 08:37:21.41248823 +0000 UTC m=+1348.040538415" Jan 23 08:37:29 crc kubenswrapper[4860]: I0123 08:37:29.560339 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 08:37:29 crc kubenswrapper[4860]: I0123 08:37:29.561382 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerName="docker-build" containerID="cri-o://6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862" gracePeriod=30 Jan 23 08:37:29 crc kubenswrapper[4860]: I0123 08:37:29.913982 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_2165dc89-623c-4dea-a70f-72e58df50ef6/docker-build/0.log" Jan 23 08:37:29 crc kubenswrapper[4860]: I0123 08:37:29.914653 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.065789 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-root\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067378 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-build-blob-cache\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067425 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-proxy-ca-bundles\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067457 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-buildworkdir\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067540 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-node-pullsecrets\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067575 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-pull\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067596 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rrg\" (UniqueName: \"kubernetes.io/projected/2165dc89-623c-4dea-a70f-72e58df50ef6-kube-api-access-d2rrg\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067644 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-push\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067662 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-run\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067728 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-ca-bundles\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067730 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067773 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-system-configs\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.067976 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-buildcachedir\") pod \"2165dc89-623c-4dea-a70f-72e58df50ef6\" (UID: \"2165dc89-623c-4dea-a70f-72e58df50ef6\") " Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.068264 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.068648 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.069239 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.069380 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.069840 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.069888 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.069986 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.069922 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.070002 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.070055 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.070070 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2165dc89-623c-4dea-a70f-72e58df50ef6-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.074696 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.074743 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.076678 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2165dc89-623c-4dea-a70f-72e58df50ef6-kube-api-access-d2rrg" (OuterVolumeSpecName: "kube-api-access-d2rrg") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "kube-api-access-d2rrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.140783 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.171692 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.171739 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.171754 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2rrg\" (UniqueName: \"kubernetes.io/projected/2165dc89-623c-4dea-a70f-72e58df50ef6-kube-api-access-d2rrg\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.171773 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/2165dc89-623c-4dea-a70f-72e58df50ef6-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.171786 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.171797 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2165dc89-623c-4dea-a70f-72e58df50ef6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.432814 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2165dc89-623c-4dea-a70f-72e58df50ef6" (UID: "2165dc89-623c-4dea-a70f-72e58df50ef6"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.437228 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_2165dc89-623c-4dea-a70f-72e58df50ef6/docker-build/0.log" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.438326 4860 generic.go:334] "Generic (PLEG): container finished" podID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerID="6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862" exitCode=1 Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.438367 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.438370 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"2165dc89-623c-4dea-a70f-72e58df50ef6","Type":"ContainerDied","Data":"6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862"} Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.438480 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"2165dc89-623c-4dea-a70f-72e58df50ef6","Type":"ContainerDied","Data":"53e07700cd31f28dc68c6271c8be07757d916f8ca18d421ad3c50c31bf3c5156"} Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.438506 4860 scope.go:117] "RemoveContainer" containerID="6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.472058 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.475972 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2165dc89-623c-4dea-a70f-72e58df50ef6-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.478522 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.481290 4860 scope.go:117] "RemoveContainer" containerID="d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.501295 4860 scope.go:117] "RemoveContainer" containerID="6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862" Jan 23 08:37:30 crc kubenswrapper[4860]: E0123 08:37:30.501753 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862\": container with ID starting with 6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862 not found: ID does not exist" containerID="6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.501795 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862"} err="failed to get container status \"6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862\": rpc error: code = NotFound desc = could not find container \"6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862\": container with ID starting with 6ed6ca5830edbd2296bbb4b03f7e8163b8179a80cfc7dc1b04224dafc4626862 not found: ID does not exist" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.501818 4860 scope.go:117] "RemoveContainer" containerID="d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93" Jan 23 08:37:30 crc kubenswrapper[4860]: E0123 08:37:30.502136 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93\": container with ID starting with d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93 not found: ID does not exist" 
containerID="d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93" Jan 23 08:37:30 crc kubenswrapper[4860]: I0123 08:37:30.502165 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93"} err="failed to get container status \"d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93\": rpc error: code = NotFound desc = could not find container \"d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93\": container with ID starting with d99dfe21b2deb5245af55268e80825cb37bd8d5593676a7571d67d874813fc93 not found: ID does not exist" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.203092 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 23 08:37:31 crc kubenswrapper[4860]: E0123 08:37:31.203366 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerName="manage-dockerfile" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.203381 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerName="manage-dockerfile" Jan 23 08:37:31 crc kubenswrapper[4860]: E0123 08:37:31.203401 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerName="docker-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.203410 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerName="docker-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.203535 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" containerName="docker-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.204484 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.206392 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.206719 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.208504 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.208554 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-2dlqm" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.219401 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287466 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287551 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287579 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287599 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287619 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287637 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287677 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287796 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287825 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287853 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287878 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.287909 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rwvv\" (UniqueName: \"kubernetes.io/projected/cd0fd159-b3a4-4684-a979-08fab1cae207-kube-api-access-2rwvv\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389250 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389350 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 
08:37:31.389391 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389418 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389444 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389468 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389493 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389520 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389543 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389572 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389596 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-node-pullsecrets\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389623 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rwvv\" (UniqueName: \"kubernetes.io/projected/cd0fd159-b3a4-4684-a979-08fab1cae207-kube-api-access-2rwvv\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389683 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389873 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.389907 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.390424 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.390533 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.390666 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.390766 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.391574 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.391817 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.394606 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.401623 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.406250 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rwvv\" (UniqueName: \"kubernetes.io/projected/cd0fd159-b3a4-4684-a979-08fab1cae207-kube-api-access-2rwvv\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.520277 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.668306 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2165dc89-623c-4dea-a70f-72e58df50ef6" path="/var/lib/kubelet/pods/2165dc89-623c-4dea-a70f-72e58df50ef6/volumes" Jan 23 08:37:31 crc kubenswrapper[4860]: I0123 08:37:31.944793 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 23 08:37:32 crc kubenswrapper[4860]: I0123 08:37:32.452483 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerStarted","Data":"93e2a81d196b282796f455ecf679fd4d3504725533efabc9c9d2732ed6a59518"} Jan 23 08:37:32 crc kubenswrapper[4860]: I0123 08:37:32.452789 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerStarted","Data":"4a994d16a2686d8f8b8b3fa0db7db1515ec334dd6207a82b9aa4de747544f97b"} Jan 23 08:37:33 crc kubenswrapper[4860]: I0123 08:37:33.461264 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerID="93e2a81d196b282796f455ecf679fd4d3504725533efabc9c9d2732ed6a59518" exitCode=0 Jan 23 08:37:33 crc kubenswrapper[4860]: I0123 08:37:33.461309 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerDied","Data":"93e2a81d196b282796f455ecf679fd4d3504725533efabc9c9d2732ed6a59518"} Jan 23 08:37:34 crc kubenswrapper[4860]: I0123 08:37:34.470385 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerID="e3cf7204ca9eb8ff60499db6789d53075ffb71d03508262226bd43c72fc39b4d" exitCode=0 Jan 23 08:37:34 crc kubenswrapper[4860]: I0123 08:37:34.470466 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerDied","Data":"e3cf7204ca9eb8ff60499db6789d53075ffb71d03508262226bd43c72fc39b4d"} Jan 23 08:37:34 crc kubenswrapper[4860]: I0123 08:37:34.518615 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_cd0fd159-b3a4-4684-a979-08fab1cae207/manage-dockerfile/0.log" Jan 23 08:37:35 crc kubenswrapper[4860]: I0123 08:37:35.480828 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerStarted","Data":"18f1223982de0c3331b4e79f116a0d7ba7f8f7f52d949a8fa3212d1f4e70e9a6"} Jan 23 08:37:35 crc kubenswrapper[4860]: I0123 08:37:35.511266 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=4.511244774 podStartE2EDuration="4.511244774s" podCreationTimestamp="2026-01-23 08:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:37:35.50942152 +0000 UTC m=+1362.137471705" watchObservedRunningTime="2026-01-23 08:37:35.511244774 +0000 UTC m=+1362.139294959" Jan 23 08:38:26 crc kubenswrapper[4860]: I0123 08:38:26.775971 4860 patch_prober.go:28] 
interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:38:26 crc kubenswrapper[4860]: I0123 08:38:26.776630 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:38:38 crc kubenswrapper[4860]: I0123 08:38:38.877510 4860 generic.go:334] "Generic (PLEG): container finished" podID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerID="18f1223982de0c3331b4e79f116a0d7ba7f8f7f52d949a8fa3212d1f4e70e9a6" exitCode=0 Jan 23 08:38:38 crc kubenswrapper[4860]: I0123 08:38:38.877582 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerDied","Data":"18f1223982de0c3331b4e79f116a0d7ba7f8f7f52d949a8fa3212d1f4e70e9a6"} Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.148783 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337563 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-push\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337613 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-buildworkdir\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337636 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-proxy-ca-bundles\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337669 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rwvv\" (UniqueName: \"kubernetes.io/projected/cd0fd159-b3a4-4684-a979-08fab1cae207-kube-api-access-2rwvv\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337713 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-node-pullsecrets\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337731 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-build-blob-cache\") pod 
\"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337774 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-root\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337792 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-ca-bundles\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.337832 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.338495 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.338584 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340328 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-system-configs\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340385 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-buildcachedir\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340449 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-run\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340487 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-pull\") pod \"cd0fd159-b3a4-4684-a979-08fab1cae207\" (UID: \"cd0fd159-b3a4-4684-a979-08fab1cae207\") " Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340688 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340802 4860 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340819 4860 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340832 4860 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340843 4860 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.340870 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.341786 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.341944 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.344044 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-push" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-push") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "builder-dockercfg-2dlqm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.353216 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0fd159-b3a4-4684-a979-08fab1cae207-kube-api-access-2rwvv" (OuterVolumeSpecName: "kube-api-access-2rwvv") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "kube-api-access-2rwvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.354216 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-pull" (OuterVolumeSpecName: "builder-dockercfg-2dlqm-pull") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "builder-dockercfg-2dlqm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.445335 4860 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cd0fd159-b3a4-4684-a979-08fab1cae207-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.445371 4860 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cd0fd159-b3a4-4684-a979-08fab1cae207-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.445385 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.445397 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-pull\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-pull\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.445411 4860 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-2dlqm-push\" (UniqueName: \"kubernetes.io/secret/cd0fd159-b3a4-4684-a979-08fab1cae207-builder-dockercfg-2dlqm-push\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.445422 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rwvv\" (UniqueName: \"kubernetes.io/projected/cd0fd159-b3a4-4684-a979-08fab1cae207-kube-api-access-2rwvv\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.446048 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.546349 4860 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.895759 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"cd0fd159-b3a4-4684-a979-08fab1cae207","Type":"ContainerDied","Data":"4a994d16a2686d8f8b8b3fa0db7db1515ec334dd6207a82b9aa4de747544f97b"} Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.895814 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a994d16a2686d8f8b8b3fa0db7db1515ec334dd6207a82b9aa4de747544f97b" Jan 23 08:38:40 crc kubenswrapper[4860]: I0123 08:38:40.895944 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 08:38:41 crc kubenswrapper[4860]: I0123 08:38:41.195928 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cd0fd159-b3a4-4684-a979-08fab1cae207" (UID: "cd0fd159-b3a4-4684-a979-08fab1cae207"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:38:41 crc kubenswrapper[4860]: I0123 08:38:41.258329 4860 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cd0fd159-b3a4-4684-a979-08fab1cae207-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.452619 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf"] Jan 23 08:38:46 crc kubenswrapper[4860]: E0123 08:38:46.453213 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="manage-dockerfile" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.453230 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="manage-dockerfile" Jan 23 08:38:46 crc kubenswrapper[4860]: E0123 08:38:46.453255 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="git-clone" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.453262 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="git-clone" Jan 23 08:38:46 crc kubenswrapper[4860]: E0123 08:38:46.453276 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="docker-build" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.453283 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="docker-build" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.453419 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0fd159-b3a4-4684-a979-08fab1cae207" containerName="docker-build" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.453905 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.455625 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-bk22c" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.463203 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf"] Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.637067 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95tt\" (UniqueName: \"kubernetes.io/projected/69a7a482-ef66-4336-9668-20669049a86b-kube-api-access-c95tt\") pod \"smart-gateway-operator-c6d49cd4c-gg9cf\" (UID: \"69a7a482-ef66-4336-9668-20669049a86b\") " pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.637253 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/69a7a482-ef66-4336-9668-20669049a86b-runner\") pod \"smart-gateway-operator-c6d49cd4c-gg9cf\" (UID: \"69a7a482-ef66-4336-9668-20669049a86b\") " pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.738390 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/69a7a482-ef66-4336-9668-20669049a86b-runner\") pod \"smart-gateway-operator-c6d49cd4c-gg9cf\" (UID: \"69a7a482-ef66-4336-9668-20669049a86b\") " pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.738446 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95tt\" (UniqueName: \"kubernetes.io/projected/69a7a482-ef66-4336-9668-20669049a86b-kube-api-access-c95tt\") pod \"smart-gateway-operator-c6d49cd4c-gg9cf\" (UID: \"69a7a482-ef66-4336-9668-20669049a86b\") " pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.738863 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/69a7a482-ef66-4336-9668-20669049a86b-runner\") pod \"smart-gateway-operator-c6d49cd4c-gg9cf\" (UID: \"69a7a482-ef66-4336-9668-20669049a86b\") " pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.757909 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95tt\" (UniqueName: \"kubernetes.io/projected/69a7a482-ef66-4336-9668-20669049a86b-kube-api-access-c95tt\") pod \"smart-gateway-operator-c6d49cd4c-gg9cf\" (UID: \"69a7a482-ef66-4336-9668-20669049a86b\") " pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:46 crc kubenswrapper[4860]: I0123 08:38:46.865466 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" Jan 23 08:38:47 crc kubenswrapper[4860]: I0123 08:38:47.265963 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf"] Jan 23 08:38:47 crc kubenswrapper[4860]: I0123 08:38:47.939058 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" event={"ID":"69a7a482-ef66-4336-9668-20669049a86b","Type":"ContainerStarted","Data":"32329f12c33a98b5a1276c9e4a878d86e0597343b1d631ba893f97ef641d77b0"} Jan 23 08:38:51 crc kubenswrapper[4860]: I0123 08:38:51.967341 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz"] Jan 23 08:38:51 crc kubenswrapper[4860]: I0123 08:38:51.968799 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:51 crc kubenswrapper[4860]: I0123 08:38:51.971033 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-kwv96" Jan 23 08:38:51 crc kubenswrapper[4860]: I0123 08:38:51.980940 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz"] Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 08:38:52.125392 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/de675024-06bf-4117-a893-ad173f468f8f-runner\") pod \"service-telemetry-operator-fc4d6dcb5-bxskz\" (UID: \"de675024-06bf-4117-a893-ad173f468f8f\") " pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 08:38:52.125500 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv24v\" (UniqueName: \"kubernetes.io/projected/de675024-06bf-4117-a893-ad173f468f8f-kube-api-access-sv24v\") pod \"service-telemetry-operator-fc4d6dcb5-bxskz\" (UID: \"de675024-06bf-4117-a893-ad173f468f8f\") " pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 08:38:52.226642 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/de675024-06bf-4117-a893-ad173f468f8f-runner\") pod \"service-telemetry-operator-fc4d6dcb5-bxskz\" (UID: \"de675024-06bf-4117-a893-ad173f468f8f\") " pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 08:38:52.226761 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv24v\" (UniqueName: \"kubernetes.io/projected/de675024-06bf-4117-a893-ad173f468f8f-kube-api-access-sv24v\") pod \"service-telemetry-operator-fc4d6dcb5-bxskz\" (UID: \"de675024-06bf-4117-a893-ad173f468f8f\") " pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 08:38:52.227496 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/de675024-06bf-4117-a893-ad173f468f8f-runner\") pod \"service-telemetry-operator-fc4d6dcb5-bxskz\" (UID: \"de675024-06bf-4117-a893-ad173f468f8f\") " pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 
08:38:52.257537 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv24v\" (UniqueName: \"kubernetes.io/projected/de675024-06bf-4117-a893-ad173f468f8f-kube-api-access-sv24v\") pod \"service-telemetry-operator-fc4d6dcb5-bxskz\" (UID: \"de675024-06bf-4117-a893-ad173f468f8f\") " pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:52 crc kubenswrapper[4860]: I0123 08:38:52.294332 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" Jan 23 08:38:56 crc kubenswrapper[4860]: I0123 08:38:56.775388 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:38:56 crc kubenswrapper[4860]: I0123 08:38:56.776087 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.430519 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brwgz"] Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.442210 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.456889 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brwgz"] Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.589830 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-catalog-content\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.589912 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnzr\" (UniqueName: \"kubernetes.io/projected/e496451f-bc0c-44db-bce3-9ac670df31cb-kube-api-access-znnzr\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.589970 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-utilities\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.691190 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-utilities\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 
08:39:04.691265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-catalog-content\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.691296 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znnzr\" (UniqueName: \"kubernetes.io/projected/e496451f-bc0c-44db-bce3-9ac670df31cb-kube-api-access-znnzr\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.691947 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-utilities\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.692186 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-catalog-content\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.718543 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnzr\" (UniqueName: \"kubernetes.io/projected/e496451f-bc0c-44db-bce3-9ac670df31cb-kube-api-access-znnzr\") pod \"community-operators-brwgz\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:04 crc kubenswrapper[4860]: I0123 08:39:04.758211 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:09 crc kubenswrapper[4860]: I0123 08:39:09.161774 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brwgz"] Jan 23 08:39:09 crc kubenswrapper[4860]: I0123 08:39:09.313328 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz"] Jan 23 08:39:09 crc kubenswrapper[4860]: W0123 08:39:09.836483 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde675024_06bf_4117_a893_ad173f468f8f.slice/crio-db7f671ee0fe65522808720a6101a633082cf090a54da5315505b1d3517fc786 WatchSource:0}: Error finding container db7f671ee0fe65522808720a6101a633082cf090a54da5315505b1d3517fc786: Status 404 returned error can't find the container with id db7f671ee0fe65522808720a6101a633082cf090a54da5315505b1d3517fc786 Jan 23 08:39:10 crc kubenswrapper[4860]: I0123 08:39:10.082729 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" event={"ID":"de675024-06bf-4117-a893-ad173f468f8f","Type":"ContainerStarted","Data":"db7f671ee0fe65522808720a6101a633082cf090a54da5315505b1d3517fc786"} Jan 23 08:39:10 crc kubenswrapper[4860]: I0123 08:39:10.084449 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerStarted","Data":"2a5bb6de2779861e29cdcf6e595209886cda5fc4771f6aedb5d2bafa4125bb72"} Jan 23 08:39:14 crc kubenswrapper[4860]: E0123 08:39:14.588226 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Jan 23 08:39:14 crc kubenswrapper[4860]: E0123 08:39:14.588789 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1769157521,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c95tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-c6d49cd4c-gg9cf_service-telemetry(69a7a482-ef66-4336-9668-20669049a86b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:39:14 crc kubenswrapper[4860]: E0123 08:39:14.590343 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" podUID="69a7a482-ef66-4336-9668-20669049a86b" Jan 23 08:39:15 crc kubenswrapper[4860]: I0123 08:39:15.117727 4860 generic.go:334] "Generic (PLEG): container finished" podID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerID="60f9bc8335ff0c151c07da083941f2015a736992fe35c1dbc9a45d71fc3c64c0" exitCode=0 Jan 23 08:39:15 crc kubenswrapper[4860]: I0123 08:39:15.118010 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" 
event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerDied","Data":"60f9bc8335ff0c151c07da083941f2015a736992fe35c1dbc9a45d71fc3c64c0"} Jan 23 08:39:15 crc kubenswrapper[4860]: E0123 08:39:15.119551 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" podUID="69a7a482-ef66-4336-9668-20669049a86b" Jan 23 08:39:16 crc kubenswrapper[4860]: I0123 08:39:16.139501 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerStarted","Data":"cf33efd1a921c43570dcb5d88fc8c5f074ff4363243ab87636329b453b07a6eb"} Jan 23 08:39:17 crc kubenswrapper[4860]: I0123 08:39:17.148538 4860 generic.go:334] "Generic (PLEG): container finished" podID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerID="cf33efd1a921c43570dcb5d88fc8c5f074ff4363243ab87636329b453b07a6eb" exitCode=0 Jan 23 08:39:17 crc kubenswrapper[4860]: I0123 08:39:17.148588 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerDied","Data":"cf33efd1a921c43570dcb5d88fc8c5f074ff4363243ab87636329b453b07a6eb"} Jan 23 08:39:20 crc kubenswrapper[4860]: I0123 08:39:20.169270 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerStarted","Data":"4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e"} Jan 23 08:39:20 crc kubenswrapper[4860]: I0123 08:39:20.172268 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" event={"ID":"de675024-06bf-4117-a893-ad173f468f8f","Type":"ContainerStarted","Data":"6497d22e794ae3941bf050782eaa5434dcac699a50994af8fe137cc30d487dad"} Jan 23 08:39:20 crc kubenswrapper[4860]: I0123 08:39:20.190347 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brwgz" podStartSLOduration=11.556493052 podStartE2EDuration="16.190326163s" podCreationTimestamp="2026-01-23 08:39:04 +0000 UTC" firstStartedPulling="2026-01-23 08:39:15.119808393 +0000 UTC m=+1461.747858578" lastFinishedPulling="2026-01-23 08:39:19.753641504 +0000 UTC m=+1466.381691689" observedRunningTime="2026-01-23 08:39:20.185551577 +0000 UTC m=+1466.813601772" watchObservedRunningTime="2026-01-23 08:39:20.190326163 +0000 UTC m=+1466.818376348" Jan 23 08:39:20 crc kubenswrapper[4860]: I0123 08:39:20.210345 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-fc4d6dcb5-bxskz" podStartSLOduration=19.634601629 podStartE2EDuration="29.21032672s" podCreationTimestamp="2026-01-23 08:38:51 +0000 UTC" firstStartedPulling="2026-01-23 08:39:09.839409164 +0000 UTC m=+1456.467459349" lastFinishedPulling="2026-01-23 08:39:19.415134255 +0000 UTC m=+1466.043184440" observedRunningTime="2026-01-23 08:39:20.205728178 +0000 UTC m=+1466.833778373" watchObservedRunningTime="2026-01-23 08:39:20.21032672 +0000 UTC m=+1466.838376905" Jan 23 08:39:24 crc kubenswrapper[4860]: I0123 08:39:24.758828 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:24 crc kubenswrapper[4860]: I0123 08:39:24.759473 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:24 crc kubenswrapper[4860]: I0123 08:39:24.806354 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:25 crc kubenswrapper[4860]: I0123 08:39:25.239926 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:25 crc kubenswrapper[4860]: I0123 08:39:25.278756 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brwgz"] Jan 23 08:39:26 crc kubenswrapper[4860]: I0123 08:39:26.777637 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:39:26 crc kubenswrapper[4860]: I0123 08:39:26.777958 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:39:26 crc kubenswrapper[4860]: I0123 08:39:26.778006 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:39:26 crc kubenswrapper[4860]: I0123 08:39:26.778642 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfb686e44ef0e61ba024387441d8514e0c284409a4a0bfcf8b79aaad27b5ee16"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:39:26 crc kubenswrapper[4860]: I0123 08:39:26.778711 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://cfb686e44ef0e61ba024387441d8514e0c284409a4a0bfcf8b79aaad27b5ee16" gracePeriod=600 Jan 23 08:39:27 crc kubenswrapper[4860]: I0123 08:39:27.209462 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brwgz" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="registry-server" containerID="cri-o://4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e" gracePeriod=2 Jan 23 08:39:30 crc kubenswrapper[4860]: I0123 08:39:30.228157 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="cfb686e44ef0e61ba024387441d8514e0c284409a4a0bfcf8b79aaad27b5ee16" exitCode=0 Jan 23 08:39:30 crc kubenswrapper[4860]: I0123 08:39:30.228257 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"cfb686e44ef0e61ba024387441d8514e0c284409a4a0bfcf8b79aaad27b5ee16"} Jan 23 
08:39:30 crc kubenswrapper[4860]: I0123 08:39:30.228567 4860 scope.go:117] "RemoveContainer" containerID="0c5e138e24069ddd965b8c6e8f41ef692b2a1185591867905ae8eb8f0de42c68" Jan 23 08:39:31 crc kubenswrapper[4860]: I0123 08:39:31.237822 4860 generic.go:334] "Generic (PLEG): container finished" podID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerID="4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e" exitCode=0 Jan 23 08:39:31 crc kubenswrapper[4860]: I0123 08:39:31.237894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerDied","Data":"4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e"} Jan 23 08:39:35 crc kubenswrapper[4860]: E0123 08:39:34.760484 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e is running failed: container process not found" containerID="4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:39:35 crc kubenswrapper[4860]: E0123 08:39:34.761006 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e is running failed: container process not found" containerID="4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:39:35 crc kubenswrapper[4860]: E0123 08:39:34.761408 4860 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e is running failed: container process not found" containerID="4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 08:39:35 crc kubenswrapper[4860]: E0123 08:39:34.761459 4860 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-brwgz" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="registry-server" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.705046 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.836005 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znnzr\" (UniqueName: \"kubernetes.io/projected/e496451f-bc0c-44db-bce3-9ac670df31cb-kube-api-access-znnzr\") pod \"e496451f-bc0c-44db-bce3-9ac670df31cb\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.836113 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-catalog-content\") pod \"e496451f-bc0c-44db-bce3-9ac670df31cb\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.836170 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-utilities\") pod \"e496451f-bc0c-44db-bce3-9ac670df31cb\" (UID: \"e496451f-bc0c-44db-bce3-9ac670df31cb\") " Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.837610 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-utilities" (OuterVolumeSpecName: "utilities") pod "e496451f-bc0c-44db-bce3-9ac670df31cb" (UID: "e496451f-bc0c-44db-bce3-9ac670df31cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.849234 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e496451f-bc0c-44db-bce3-9ac670df31cb-kube-api-access-znnzr" (OuterVolumeSpecName: "kube-api-access-znnzr") pod "e496451f-bc0c-44db-bce3-9ac670df31cb" (UID: "e496451f-bc0c-44db-bce3-9ac670df31cb"). InnerVolumeSpecName "kube-api-access-znnzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.889982 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e496451f-bc0c-44db-bce3-9ac670df31cb" (UID: "e496451f-bc0c-44db-bce3-9ac670df31cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.937429 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znnzr\" (UniqueName: \"kubernetes.io/projected/e496451f-bc0c-44db-bce3-9ac670df31cb-kube-api-access-znnzr\") on node \"crc\" DevicePath \"\"" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.937694 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:39:35 crc kubenswrapper[4860]: I0123 08:39:35.937704 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e496451f-bc0c-44db-bce3-9ac670df31cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.268957 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47"} Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.271159 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brwgz" event={"ID":"e496451f-bc0c-44db-bce3-9ac670df31cb","Type":"ContainerDied","Data":"2a5bb6de2779861e29cdcf6e595209886cda5fc4771f6aedb5d2bafa4125bb72"} Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.271185 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brwgz" Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.271208 4860 scope.go:117] "RemoveContainer" containerID="4eebab656ed73d46bb187f96faf7700033015730f39bb0849d6923ba6e7b056e" Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.272589 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" event={"ID":"69a7a482-ef66-4336-9668-20669049a86b","Type":"ContainerStarted","Data":"43c5d85a1bd0d100ae4fa5b53d8c1a344261bfdee377e63bb33b8c456380fc93"} Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.296251 4860 scope.go:117] "RemoveContainer" containerID="cf33efd1a921c43570dcb5d88fc8c5f074ff4363243ab87636329b453b07a6eb" Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.307134 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-c6d49cd4c-gg9cf" podStartSLOduration=2.227403098 podStartE2EDuration="50.30711529s" podCreationTimestamp="2026-01-23 08:38:46 +0000 UTC" firstStartedPulling="2026-01-23 08:38:47.276252426 +0000 UTC m=+1433.904302611" lastFinishedPulling="2026-01-23 08:39:35.355964628 +0000 UTC m=+1481.984014803" observedRunningTime="2026-01-23 08:39:36.300466869 +0000 UTC m=+1482.928517054" watchObservedRunningTime="2026-01-23 08:39:36.30711529 +0000 UTC m=+1482.935165475" Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.318359 4860 scope.go:117] "RemoveContainer" containerID="60f9bc8335ff0c151c07da083941f2015a736992fe35c1dbc9a45d71fc3c64c0" Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.324123 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brwgz"] Jan 23 08:39:36 crc kubenswrapper[4860]: I0123 08:39:36.329216 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-brwgz"] Jan 23 08:39:37 crc kubenswrapper[4860]: I0123 08:39:37.665462 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" path="/var/lib/kubelet/pods/e496451f-bc0c-44db-bce3-9ac670df31cb/volumes" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.207739 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zhhlg"] Jan 23 08:39:46 crc kubenswrapper[4860]: E0123 08:39:46.208530 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="registry-server" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.208546 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="registry-server" Jan 23 08:39:46 crc kubenswrapper[4860]: E0123 08:39:46.208556 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="extract-utilities" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.208563 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="extract-utilities" Jan 23 08:39:46 crc kubenswrapper[4860]: E0123 08:39:46.208570 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="extract-content" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.208578 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="extract-content" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.208688 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="e496451f-bc0c-44db-bce3-9ac670df31cb" containerName="registry-server" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.209173 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.215100 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.215825 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-wxx6s" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.216626 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.216971 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.217133 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.217708 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.218995 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.225796 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zhhlg"] Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.284949 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqgbn\" (UniqueName: \"kubernetes.io/projected/a45edd37-9f00-44b2-be28-7891a2ee264e-kube-api-access-bqgbn\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.285032 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.285068 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.285142 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.285180 
4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-users\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.285225 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.285259 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-config\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.386295 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.386370 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-users\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.386416 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.386455 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-config\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.387703 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-config\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.386501 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqgbn\" 
(UniqueName: \"kubernetes.io/projected/a45edd37-9f00-44b2-be28-7891a2ee264e-kube-api-access-bqgbn\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.387819 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.388270 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.392144 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.392153 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.393031 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.393309 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.401941 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-users\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.403551 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bqgbn\" (UniqueName: \"kubernetes.io/projected/a45edd37-9f00-44b2-be28-7891a2ee264e-kube-api-access-bqgbn\") pod \"default-interconnect-68864d46cb-zhhlg\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.528982 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:39:46 crc kubenswrapper[4860]: I0123 08:39:46.951143 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zhhlg"] Jan 23 08:39:47 crc kubenswrapper[4860]: I0123 08:39:47.341792 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" event={"ID":"a45edd37-9f00-44b2-be28-7891a2ee264e","Type":"ContainerStarted","Data":"88e5517dc2b72f5e91a2eebf9316abd64238fd27e75611f621513a0567ab9a66"} Jan 23 08:39:53 crc kubenswrapper[4860]: I0123 08:39:53.405611 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" event={"ID":"a45edd37-9f00-44b2-be28-7891a2ee264e","Type":"ContainerStarted","Data":"df257c32859d766055541f74fe1790c8a9922f0989279cd1f31358512550138f"} Jan 23 08:39:53 crc kubenswrapper[4860]: I0123 08:39:53.423567 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" podStartSLOduration=2.083295414 podStartE2EDuration="7.42354335s" podCreationTimestamp="2026-01-23 08:39:46 +0000 UTC" firstStartedPulling="2026-01-23 08:39:46.955539204 +0000 UTC m=+1493.583589389" lastFinishedPulling="2026-01-23 08:39:52.29578714 +0000 UTC m=+1498.923837325" observedRunningTime="2026-01-23 08:39:53.423249282 +0000 UTC m=+1500.051299467" watchObservedRunningTime="2026-01-23 08:39:53.42354335 +0000 UTC m=+1500.051593535" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.592250 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.595614 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.598436 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.598592 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.598919 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.599290 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.599545 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.599652 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.599807 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-7478g" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.599923 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.600842 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.604646 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.624343 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.766476 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.766572 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.766614 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1ec65d6-ed8f-4008-8d71-ce558a169641-tls-assets\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.766646 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.766845 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767183 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1ec65d6-ed8f-4008-8d71-ce558a169641-config-out\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767344 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767406 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-web-config\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767453 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-config\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767520 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khfw\" (UniqueName: \"kubernetes.io/projected/e1ec65d6-ed8f-4008-8d71-ce558a169641-kube-api-access-8khfw\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767569 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.767743 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc 
kubenswrapper[4860]: I0123 08:39:56.870510 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870585 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1ec65d6-ed8f-4008-8d71-ce558a169641-tls-assets\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870638 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870675 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870723 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1ec65d6-ed8f-4008-8d71-ce558a169641-config-out\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870782 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870816 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-web-config\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870853 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-config\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870890 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc 
kubenswrapper[4860]: I0123 08:39:56.870920 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8khfw\" (UniqueName: \"kubernetes.io/projected/e1ec65d6-ed8f-4008-8d71-ce558a169641-kube-api-access-8khfw\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.870959 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.871039 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: E0123 08:39:56.871416 4860 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.871472 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: E0123 08:39:56.871495 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls podName:e1ec65d6-ed8f-4008-8d71-ce558a169641 nodeName:}" failed. No retries permitted until 2026-01-23 08:39:57.371478901 +0000 UTC m=+1503.999529086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "e1ec65d6-ed8f-4008-8d71-ce558a169641") : secret "default-prometheus-proxy-tls" not found Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.872194 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.872446 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.872568 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1ec65d6-ed8f-4008-8d71-ce558a169641-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.877784 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-config\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.877784 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.877876 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-web-config\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.880005 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1ec65d6-ed8f-4008-8d71-ce558a169641-config-out\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.881756 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1ec65d6-ed8f-4008-8d71-ce558a169641-tls-assets\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.881987 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.882045 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9cf57560878cafa4a14947f0b7836c1f22805c711663b790e3639945cec3b0af/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.900868 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8khfw\" (UniqueName: \"kubernetes.io/projected/e1ec65d6-ed8f-4008-8d71-ce558a169641-kube-api-access-8khfw\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:56 crc kubenswrapper[4860]: I0123 08:39:56.903385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1d57c271-6e74-4314-b8dd-5977534d6d7e\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:57 crc kubenswrapper[4860]: I0123 08:39:57.377897 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:57 crc kubenswrapper[4860]: E0123 08:39:57.378184 4860 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 23 08:39:57 crc kubenswrapper[4860]: E0123 08:39:57.378295 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls podName:e1ec65d6-ed8f-4008-8d71-ce558a169641 nodeName:}" failed. No retries permitted until 2026-01-23 08:39:58.378264929 +0000 UTC m=+1505.006315154 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "e1ec65d6-ed8f-4008-8d71-ce558a169641") : secret "default-prometheus-proxy-tls" not found Jan 23 08:39:58 crc kubenswrapper[4860]: I0123 08:39:58.393691 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:58 crc kubenswrapper[4860]: I0123 08:39:58.399178 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1ec65d6-ed8f-4008-8d71-ce558a169641-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"e1ec65d6-ed8f-4008-8d71-ce558a169641\") " pod="service-telemetry/prometheus-default-0" Jan 23 08:39:58 crc kubenswrapper[4860]: I0123 08:39:58.419432 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 23 08:39:58 crc kubenswrapper[4860]: I0123 08:39:58.651970 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 23 08:39:58 crc kubenswrapper[4860]: W0123 08:39:58.666282 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1ec65d6_ed8f_4008_8d71_ce558a169641.slice/crio-e414af859ec79e10453f7bded3243ba492faa065a1d9377e52fca52bb3af7d6f WatchSource:0}: Error finding container e414af859ec79e10453f7bded3243ba492faa065a1d9377e52fca52bb3af7d6f: Status 404 returned error can't find the container with id e414af859ec79e10453f7bded3243ba492faa065a1d9377e52fca52bb3af7d6f Jan 23 08:39:59 crc kubenswrapper[4860]: I0123 08:39:59.443386 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e1ec65d6-ed8f-4008-8d71-ce558a169641","Type":"ContainerStarted","Data":"e414af859ec79e10453f7bded3243ba492faa065a1d9377e52fca52bb3af7d6f"} Jan 23 08:40:04 crc kubenswrapper[4860]: I0123 08:40:04.494740 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e1ec65d6-ed8f-4008-8d71-ce558a169641","Type":"ContainerStarted","Data":"ccf7c89e13336b50b867607bbe48ed3c9a5e5c4c5974dc75db730bc7411b82a1"} Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.346104 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-cngd4"] Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.348181 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.357404 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-cngd4"] Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.421771 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzvdt\" (UniqueName: \"kubernetes.io/projected/55117c66-88ef-4d9e-9e07-3c228a43a0d8-kube-api-access-jzvdt\") pod \"default-snmp-webhook-6856cfb745-cngd4\" (UID: \"55117c66-88ef-4d9e-9e07-3c228a43a0d8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.523320 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzvdt\" (UniqueName: \"kubernetes.io/projected/55117c66-88ef-4d9e-9e07-3c228a43a0d8-kube-api-access-jzvdt\") pod \"default-snmp-webhook-6856cfb745-cngd4\" (UID: \"55117c66-88ef-4d9e-9e07-3c228a43a0d8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.544597 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzvdt\" (UniqueName: \"kubernetes.io/projected/55117c66-88ef-4d9e-9e07-3c228a43a0d8-kube-api-access-jzvdt\") pod \"default-snmp-webhook-6856cfb745-cngd4\" (UID: \"55117c66-88ef-4d9e-9e07-3c228a43a0d8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" Jan 23 08:40:07 crc kubenswrapper[4860]: I0123 08:40:07.668116 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" Jan 23 08:40:08 crc kubenswrapper[4860]: I0123 08:40:08.133334 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-cngd4"] Jan 23 08:40:08 crc kubenswrapper[4860]: I0123 08:40:08.524719 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" event={"ID":"55117c66-88ef-4d9e-9e07-3c228a43a0d8","Type":"ContainerStarted","Data":"816ab81c8472b106e5b021af6487829b22e0d6a65e8273b02e171176d7dcec54"} Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.043833 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.045631 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.048135 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.048183 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.048186 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.048191 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.048244 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.048314 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-xx75m" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.056829 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184278 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184320 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnptl\" (UniqueName: \"kubernetes.io/projected/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-kube-api-access-mnptl\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184346 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-tls-assets\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184380 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184398 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-config-volume\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184414 4860 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-config-out\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184440 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2633f392-10a5-4473-9438-8002672fc106\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2633f392-10a5-4473-9438-8002672fc106\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184561 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-web-config\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.184626 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.285908 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287058 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-config-volume\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287103 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-config-out\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287144 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2633f392-10a5-4473-9438-8002672fc106\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2633f392-10a5-4473-9438-8002672fc106\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287168 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-web-config\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc 
kubenswrapper[4860]: I0123 08:40:11.287223 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287267 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnptl\" (UniqueName: \"kubernetes.io/projected/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-kube-api-access-mnptl\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.287336 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-tls-assets\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: E0123 08:40:11.288708 4860 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 23 08:40:11 crc kubenswrapper[4860]: E0123 08:40:11.288796 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls podName:8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f nodeName:}" failed. No retries permitted until 2026-01-23 08:40:11.78876787 +0000 UTC m=+1518.416818055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f") : secret "default-alertmanager-proxy-tls" not found Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.289876 4860 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
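
The failed Secret mount logged just above is retried with a doubling backoff (durationBeforeRetry 500ms here, then 1s and 2s further down) until secret "default-alertmanager-proxy-tls" exists in the service-telemetry namespace; the log shows the mount finally succeeding at 08:40:14. The short Go sketch below, written against client-go, just polls for that Secret with the same doubling cadence as a way to watch for it from outside the node. It is an illustration only, not the kubelet's own retry code; the kubeconfig path and the attempt cap are assumptions.

// secretwait.go: minimal sketch (assumed helper, not part of the cluster) that
// checks for the Secret the kubelet above reports as "not found", backing off
// 500ms -> 1s -> 2s like the durationBeforeRetry values in the log.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a local kubeconfig with access to the service-telemetry namespace.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const ns, name = "service-telemetry", "default-alertmanager-proxy-tls"
	wait := 500 * time.Millisecond // first retry interval seen in the log
	for attempt := 1; attempt <= 6; attempt++ {
		_, err := client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			fmt.Printf("secret %s/%s present after %d attempt(s)\n", ns, name, attempt)
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, wait)
		time.Sleep(wait)
		wait *= 2 // doubles each time, matching the kubelet's 500ms/1s/2s retries above
	}
	fmt.Printf("secret %s/%s still missing after all attempts\n", ns, name)
}

The same pattern applies to the other "secret ... not found" retries in this log (default-prometheus-proxy-tls, default-cloud1-coll-meter-proxy-tls, default-cloud1-ceil-meter-proxy-tls): each mount keeps failing with a doubling delay until the corresponding Secret is created, after which MountVolume.SetUp succeeds and the pod sandbox is started.
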
Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.289917 4860 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2633f392-10a5-4473-9438-8002672fc106\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2633f392-10a5-4473-9438-8002672fc106\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9192847cd376b5c9818f55096faf1020d9fa0e5ff77e65aa16ca0903755759c8/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.291800 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.293429 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-config-out\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.293594 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-tls-assets\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.293676 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.298841 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-web-config\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.303649 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-config-volume\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.306258 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnptl\" (UniqueName: \"kubernetes.io/projected/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-kube-api-access-mnptl\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.316775 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2633f392-10a5-4473-9438-8002672fc106\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2633f392-10a5-4473-9438-8002672fc106\") pod 
\"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: I0123 08:40:11.795911 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:11 crc kubenswrapper[4860]: E0123 08:40:11.796117 4860 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 23 08:40:11 crc kubenswrapper[4860]: E0123 08:40:11.796212 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls podName:8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f nodeName:}" failed. No retries permitted until 2026-01-23 08:40:12.796193592 +0000 UTC m=+1519.424243777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f") : secret "default-alertmanager-proxy-tls" not found Jan 23 08:40:12 crc kubenswrapper[4860]: I0123 08:40:12.549793 4860 generic.go:334] "Generic (PLEG): container finished" podID="e1ec65d6-ed8f-4008-8d71-ce558a169641" containerID="ccf7c89e13336b50b867607bbe48ed3c9a5e5c4c5974dc75db730bc7411b82a1" exitCode=0 Jan 23 08:40:12 crc kubenswrapper[4860]: I0123 08:40:12.549886 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e1ec65d6-ed8f-4008-8d71-ce558a169641","Type":"ContainerDied","Data":"ccf7c89e13336b50b867607bbe48ed3c9a5e5c4c5974dc75db730bc7411b82a1"} Jan 23 08:40:12 crc kubenswrapper[4860]: I0123 08:40:12.814309 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:12 crc kubenswrapper[4860]: E0123 08:40:12.814578 4860 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 23 08:40:12 crc kubenswrapper[4860]: E0123 08:40:12.814818 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls podName:8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f nodeName:}" failed. No retries permitted until 2026-01-23 08:40:14.814795263 +0000 UTC m=+1521.442845448 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f") : secret "default-alertmanager-proxy-tls" not found Jan 23 08:40:14 crc kubenswrapper[4860]: I0123 08:40:14.844819 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:14 crc kubenswrapper[4860]: I0123 08:40:14.852448 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f\") " pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:14 crc kubenswrapper[4860]: I0123 08:40:14.971354 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 23 08:40:20 crc kubenswrapper[4860]: E0123 08:40:20.468005 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest" Jan 23 08:40:20 crc kubenswrapper[4860]: E0123 08:40:20.468578 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-webhook-snmp,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9099,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:SNMP_COMMUNITY,Value:public,ValueFrom:nil,},EnvVar{Name:SNMP_RETRIES,Value:5,ValueFrom:nil,},EnvVar{Name:SNMP_HOST,Value:192.168.24.254,ValueFrom:nil,},EnvVar{Name:SNMP_PORT,Value:162,ValueFrom:nil,},EnvVar{Name:SNMP_TIMEOUT,Value:1,ValueFrom:nil,},EnvVar{Name:ALERT_OID_LABEL,Value:oid,ValueFrom:nil,},EnvVar{Name:TRAP_OID_PREFIX,Value:1.3.6.1.4.1.50495.15,ValueFrom:nil,},EnvVar{Name:TRAP_DEFAULT_OID,Value:1.3.6.1.4.1.50495.15.1.2.1,ValueFrom:nil,},EnvVar{Name:TRAP_DEFAULT_SEVERITY,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzvdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
default-snmp-webhook-6856cfb745-cngd4_service-telemetry(55117c66-88ef-4d9e-9e07-3c228a43a0d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:40:20 crc kubenswrapper[4860]: E0123 08:40:20.469753 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-webhook-snmp\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" podUID="55117c66-88ef-4d9e-9e07-3c228a43a0d8" Jan 23 08:40:20 crc kubenswrapper[4860]: E0123 08:40:20.612681 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-webhook-snmp\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest\\\"\"" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" podUID="55117c66-88ef-4d9e-9e07-3c228a43a0d8" Jan 23 08:40:20 crc kubenswrapper[4860]: I0123 08:40:20.787797 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 23 08:40:21 crc kubenswrapper[4860]: I0123 08:40:21.626703 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f","Type":"ContainerStarted","Data":"b15275a04181c0995fa3d9fcde37c11b497524dd6398ab8dd4ed83b30c6ba301"} Jan 23 08:40:23 crc kubenswrapper[4860]: I0123 08:40:23.647847 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f","Type":"ContainerStarted","Data":"17c375b368b57682a8ffa43b40d45b62b3258b8b9f4ba724154cea5667cf6a8c"} Jan 23 08:40:25 crc kubenswrapper[4860]: I0123 08:40:25.666931 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e1ec65d6-ed8f-4008-8d71-ce558a169641","Type":"ContainerStarted","Data":"604ba219dc45a219d298acf4a635824f5b358752cc48766f77c9879766ef04bd"} Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.725657 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7"] Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.726955 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.730052 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.730199 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.730335 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.731041 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-mhxsb" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.738464 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7"] Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.809755 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.809921 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.810045 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.810073 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.810103 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rb6g\" (UniqueName: \"kubernetes.io/projected/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-kube-api-access-8rb6g\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.911135 4860 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.911216 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.911253 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.911276 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.911303 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rb6g\" (UniqueName: \"kubernetes.io/projected/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-kube-api-access-8rb6g\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.912456 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: E0123 08:40:26.912505 4860 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 08:40:26 crc kubenswrapper[4860]: E0123 08:40:26.912578 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls podName:e79ec9dc-fb1a-4807-815d-3a74e95f0ffe nodeName:}" failed. No retries permitted until 2026-01-23 08:40:27.412559105 +0000 UTC m=+1534.040609370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" (UID: "e79ec9dc-fb1a-4807-815d-3a74e95f0ffe") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.913066 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.916785 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:26 crc kubenswrapper[4860]: I0123 08:40:26.930637 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rb6g\" (UniqueName: \"kubernetes.io/projected/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-kube-api-access-8rb6g\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:27 crc kubenswrapper[4860]: I0123 08:40:27.418357 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:27 crc kubenswrapper[4860]: E0123 08:40:27.418523 4860 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 08:40:27 crc kubenswrapper[4860]: E0123 08:40:27.418624 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls podName:e79ec9dc-fb1a-4807-815d-3a74e95f0ffe nodeName:}" failed. No retries permitted until 2026-01-23 08:40:28.418600914 +0000 UTC m=+1535.046651099 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" (UID: "e79ec9dc-fb1a-4807-815d-3a74e95f0ffe") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 08:40:27 crc kubenswrapper[4860]: I0123 08:40:27.683220 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e1ec65d6-ed8f-4008-8d71-ce558a169641","Type":"ContainerStarted","Data":"abb7689275540d9b9e22c1276d2eac3e8ce92a5fddf383d0bed73c0431d05f4b"} Jan 23 08:40:28 crc kubenswrapper[4860]: I0123 08:40:28.448771 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:28 crc kubenswrapper[4860]: I0123 08:40:28.453728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e79ec9dc-fb1a-4807-815d-3a74e95f0ffe-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7\" (UID: \"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:28 crc kubenswrapper[4860]: I0123 08:40:28.546536 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.001999 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7"] Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.745818 4860 generic.go:334] "Generic (PLEG): container finished" podID="8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f" containerID="17c375b368b57682a8ffa43b40d45b62b3258b8b9f4ba724154cea5667cf6a8c" exitCode=0 Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.745894 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f","Type":"ContainerDied","Data":"17c375b368b57682a8ffa43b40d45b62b3258b8b9f4ba724154cea5667cf6a8c"} Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.748531 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerStarted","Data":"5049b7d6ce4af1d1a67ac743a3d185c0ab83a89062779495c5a39fd4dfd1960e"} Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.821560 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks"] Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.826199 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.832439 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.832685 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.833314 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks"] Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.886909 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/21111e1f-d004-4b39-b149-d7d470f5c096-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.887122 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.887242 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/21111e1f-d004-4b39-b149-d7d470f5c096-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.887304 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.887358 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2zvf\" (UniqueName: \"kubernetes.io/projected/21111e1f-d004-4b39-b149-d7d470f5c096-kube-api-access-b2zvf\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.989083 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/21111e1f-d004-4b39-b149-d7d470f5c096-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc 
kubenswrapper[4860]: I0123 08:40:29.989147 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.989173 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2zvf\" (UniqueName: \"kubernetes.io/projected/21111e1f-d004-4b39-b149-d7d470f5c096-kube-api-access-b2zvf\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.989221 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/21111e1f-d004-4b39-b149-d7d470f5c096-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.989265 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: E0123 08:40:29.989432 4860 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 08:40:29 crc kubenswrapper[4860]: E0123 08:40:29.989593 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls podName:21111e1f-d004-4b39-b149-d7d470f5c096 nodeName:}" failed. No retries permitted until 2026-01-23 08:40:30.489562619 +0000 UTC m=+1537.117612804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" (UID: "21111e1f-d004-4b39-b149-d7d470f5c096") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.990173 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/21111e1f-d004-4b39-b149-d7d470f5c096-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:29 crc kubenswrapper[4860]: I0123 08:40:29.990408 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/21111e1f-d004-4b39-b149-d7d470f5c096-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:30 crc kubenswrapper[4860]: I0123 08:40:29.996603 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:30 crc kubenswrapper[4860]: I0123 08:40:30.007665 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2zvf\" (UniqueName: \"kubernetes.io/projected/21111e1f-d004-4b39-b149-d7d470f5c096-kube-api-access-b2zvf\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:30 crc kubenswrapper[4860]: I0123 08:40:30.496300 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:30 crc kubenswrapper[4860]: E0123 08:40:30.496439 4860 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 08:40:30 crc kubenswrapper[4860]: E0123 08:40:30.496493 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls podName:21111e1f-d004-4b39-b149-d7d470f5c096 nodeName:}" failed. No retries permitted until 2026-01-23 08:40:31.49647738 +0000 UTC m=+1538.124527565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" (UID: "21111e1f-d004-4b39-b149-d7d470f5c096") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 08:40:31 crc kubenswrapper[4860]: I0123 08:40:31.529607 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:31 crc kubenswrapper[4860]: I0123 08:40:31.548578 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/21111e1f-d004-4b39-b149-d7d470f5c096-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks\" (UID: \"21111e1f-d004-4b39-b149-d7d470f5c096\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:31 crc kubenswrapper[4860]: I0123 08:40:31.659746 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.096137 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf"] Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.099485 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.102292 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.102618 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.111266 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf"] Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.196570 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks"] Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.196841 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/50c18fce-7fbd-465c-b950-acfef76ac285-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.196941 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.196972 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/50c18fce-7fbd-465c-b950-acfef76ac285-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.196993 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fznrs\" (UniqueName: \"kubernetes.io/projected/50c18fce-7fbd-465c-b950-acfef76ac285-kube-api-access-fznrs\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.197047 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.298652 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.298704 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/50c18fce-7fbd-465c-b950-acfef76ac285-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.298729 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fznrs\" (UniqueName: \"kubernetes.io/projected/50c18fce-7fbd-465c-b950-acfef76ac285-kube-api-access-fznrs\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.298756 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.298824 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/50c18fce-7fbd-465c-b950-acfef76ac285-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: E0123 08:40:34.299513 4860 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 08:40:34 crc kubenswrapper[4860]: E0123 08:40:34.299612 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls podName:50c18fce-7fbd-465c-b950-acfef76ac285 nodeName:}" failed. No retries permitted until 2026-01-23 08:40:34.799591577 +0000 UTC m=+1541.427641842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" (UID: "50c18fce-7fbd-465c-b950-acfef76ac285") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.300248 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/50c18fce-7fbd-465c-b950-acfef76ac285-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.300373 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/50c18fce-7fbd-465c-b950-acfef76ac285-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.312659 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.317989 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fznrs\" (UniqueName: \"kubernetes.io/projected/50c18fce-7fbd-465c-b950-acfef76ac285-kube-api-access-fznrs\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.787961 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerStarted","Data":"a427ece73fe7dfd4d9fd46c61e553f4da56667e072720678647570f880e8d515"} Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.794850 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"e1ec65d6-ed8f-4008-8d71-ce558a169641","Type":"ContainerStarted","Data":"0cb0848ae3734797d4f19b09cd13c0b24b28f5294eff5103677c96d1827aa26f"} Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.799162 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerStarted","Data":"0302d21fb3b513eaffa91108b04d8005d033add0e4c75a8b3263261540bfe92d"} Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.822370 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.453668033 podStartE2EDuration="39.822349901s" podCreationTimestamp="2026-01-23 08:39:55 +0000 UTC" firstStartedPulling="2026-01-23 08:39:58.667580355 +0000 UTC m=+1505.295630550" lastFinishedPulling="2026-01-23 
08:40:34.036262233 +0000 UTC m=+1540.664312418" observedRunningTime="2026-01-23 08:40:34.817286835 +0000 UTC m=+1541.445337020" watchObservedRunningTime="2026-01-23 08:40:34.822349901 +0000 UTC m=+1541.450400086" Jan 23 08:40:34 crc kubenswrapper[4860]: I0123 08:40:34.846439 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:34 crc kubenswrapper[4860]: E0123 08:40:34.846607 4860 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 08:40:34 crc kubenswrapper[4860]: E0123 08:40:34.846656 4860 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls podName:50c18fce-7fbd-465c-b950-acfef76ac285 nodeName:}" failed. No retries permitted until 2026-01-23 08:40:35.846641821 +0000 UTC m=+1542.474692006 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" (UID: "50c18fce-7fbd-465c-b950-acfef76ac285") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 08:40:35 crc kubenswrapper[4860]: I0123 08:40:35.861165 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:35 crc kubenswrapper[4860]: I0123 08:40:35.871209 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/50c18fce-7fbd-465c-b950-acfef76ac285-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf\" (UID: \"50c18fce-7fbd-465c-b950-acfef76ac285\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:35 crc kubenswrapper[4860]: I0123 08:40:35.927075 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" Jan 23 08:40:38 crc kubenswrapper[4860]: I0123 08:40:38.420170 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.153870 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf"] Jan 23 08:40:40 crc kubenswrapper[4860]: W0123 08:40:40.171205 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c18fce_7fbd_465c_b950_acfef76ac285.slice/crio-59b69f85cc377a6652f2922f7131ce45a05481369028bdc494e7fac202405971 WatchSource:0}: Error finding container 59b69f85cc377a6652f2922f7131ce45a05481369028bdc494e7fac202405971: Status 404 returned error can't find the container with id 59b69f85cc377a6652f2922f7131ce45a05481369028bdc494e7fac202405971 Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.866062 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerStarted","Data":"668be1c3dcd4ce7d7700af3428b8228c322775c2de22a577ab1fc28a10ee7b60"} Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.870216 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerStarted","Data":"461f240a8d85dd9b09f023096153657b06f559617cd91c8d43dbb8593b89dac8"} Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.870256 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerStarted","Data":"52ae9a478ef8556b24fdfac1fd60b19b40b82b38301ae471d36c722495b9199b"} Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.872861 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f","Type":"ContainerStarted","Data":"7a76f6c9999be91a8f0287a5881e99d01f29eb0992e10e07fd5aa59eec2b7911"} Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.875311 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerStarted","Data":"59b69f85cc377a6652f2922f7131ce45a05481369028bdc494e7fac202405971"} Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.882304 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" event={"ID":"55117c66-88ef-4d9e-9e07-3c228a43a0d8","Type":"ContainerStarted","Data":"e3664673557d0fd94dd95719e52350f016615fbf15eb71bdd097d12d8665ecdb"} Jan 23 08:40:40 crc kubenswrapper[4860]: I0123 08:40:40.896787 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-cngd4" podStartSLOduration=2.268030097 podStartE2EDuration="33.896765349s" podCreationTimestamp="2026-01-23 08:40:07 +0000 UTC" firstStartedPulling="2026-01-23 08:40:08.140479042 +0000 UTC m=+1514.768529227" lastFinishedPulling="2026-01-23 08:40:39.769214294 +0000 UTC m=+1546.397264479" 
observedRunningTime="2026-01-23 08:40:40.894618797 +0000 UTC m=+1547.522668982" watchObservedRunningTime="2026-01-23 08:40:40.896765349 +0000 UTC m=+1547.524815544" Jan 23 08:40:41 crc kubenswrapper[4860]: I0123 08:40:41.891643 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f","Type":"ContainerStarted","Data":"99eddfbb1b9ecc97a420ae181809f783cc887b83ecdfffd460a2d35b895e9b40"} Jan 23 08:40:41 crc kubenswrapper[4860]: I0123 08:40:41.893844 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerStarted","Data":"5297c3d5ef5f62d558ac291cea9707226940d2f4083442f8a60fffc882f8226c"} Jan 23 08:40:41 crc kubenswrapper[4860]: I0123 08:40:41.893890 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerStarted","Data":"3d9a6d2f4c47d66a50464cdbf0106e48a9aae292eada8fa9c017a7b89271d324"} Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.468926 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq"] Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.469892 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.480457 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.480955 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.485995 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq"] Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.559625 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.559725 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.559751 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprt2\" (UniqueName: \"kubernetes.io/projected/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-kube-api-access-pprt2\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.559769 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.660683 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.660726 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprt2\" (UniqueName: \"kubernetes.io/projected/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-kube-api-access-pprt2\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.660753 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.660792 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.661385 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.661705 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.670836 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: 
\"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.679161 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprt2\" (UniqueName: \"kubernetes.io/projected/8f71d5ef-0409-48f6-ac20-c4cfd510bac5-kube-api-access-pprt2\") pod \"default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq\" (UID: \"8f71d5ef-0409-48f6-ac20-c4cfd510bac5\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:42 crc kubenswrapper[4860]: I0123 08:40:42.801651 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" Jan 23 08:40:43 crc kubenswrapper[4860]: I0123 08:40:43.420173 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 23 08:40:43 crc kubenswrapper[4860]: I0123 08:40:43.469856 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 23 08:40:43 crc kubenswrapper[4860]: I0123 08:40:43.939528 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.533586 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4"] Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.535817 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.538223 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.541355 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4"] Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.616911 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a5e86732-0630-4800-a005-750406521136-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.617004 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a5e86732-0630-4800-a005-750406521136-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.617053 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a5e86732-0630-4800-a005-750406521136-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 
23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.617127 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm4w4\" (UniqueName: \"kubernetes.io/projected/a5e86732-0630-4800-a005-750406521136-kube-api-access-zm4w4\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.718340 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm4w4\" (UniqueName: \"kubernetes.io/projected/a5e86732-0630-4800-a005-750406521136-kube-api-access-zm4w4\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.718459 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a5e86732-0630-4800-a005-750406521136-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.718538 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a5e86732-0630-4800-a005-750406521136-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.718565 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a5e86732-0630-4800-a005-750406521136-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.719270 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a5e86732-0630-4800-a005-750406521136-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.719749 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a5e86732-0630-4800-a005-750406521136-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.737988 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a5e86732-0630-4800-a005-750406521136-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.739237 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm4w4\" (UniqueName: \"kubernetes.io/projected/a5e86732-0630-4800-a005-750406521136-kube-api-access-zm4w4\") pod \"default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4\" (UID: \"a5e86732-0630-4800-a005-750406521136\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:46 crc kubenswrapper[4860]: I0123 08:40:46.855666 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.172408 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq"] Jan 23 08:40:52 crc kubenswrapper[4860]: W0123 08:40:52.182229 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f71d5ef_0409_48f6_ac20_c4cfd510bac5.slice/crio-1864d873816c2f2ab345a3f632d0666a356a4cabc475664a74690e7123e72932 WatchSource:0}: Error finding container 1864d873816c2f2ab345a3f632d0666a356a4cabc475664a74690e7123e72932: Status 404 returned error can't find the container with id 1864d873816c2f2ab345a3f632d0666a356a4cabc475664a74690e7123e72932 Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.308553 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4"] Jan 23 08:40:52 crc kubenswrapper[4860]: W0123 08:40:52.312004 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5e86732_0630_4800_a005_750406521136.slice/crio-63d9f5fc0439a9e46a7032e7c4f47464d474aef599a9eb801c81b72b8076b608 WatchSource:0}: Error finding container 63d9f5fc0439a9e46a7032e7c4f47464d474aef599a9eb801c81b72b8076b608: Status 404 returned error can't find the container with id 63d9f5fc0439a9e46a7032e7c4f47464d474aef599a9eb801c81b72b8076b608 Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.969137 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerStarted","Data":"8a59dd3e6dcef5a33ad024e9d18ef5f38fa415d8da4a4d34524688087f78840a"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.972375 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerStarted","Data":"93fbd23cae89528241146e7529db72b12afaa0902c730fc19103b0fcdf63f452"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.972445 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerStarted","Data":"6f2649b2b25c26b1d00da537871d508c1402c385bd517e28f9a3a820b94aa72d"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.972473 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" 
event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerStarted","Data":"1864d873816c2f2ab345a3f632d0666a356a4cabc475664a74690e7123e72932"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.975478 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerStarted","Data":"d6245f8b2f6b1119833a8369a0583f80ab02b1ee69cd221c14167db29dd87413"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.979103 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f","Type":"ContainerStarted","Data":"a45d8d76f1d72034d2f4753126f4840deab5ec17968c3e772cdff59d8b5f4504"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.981527 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerStarted","Data":"2a2af29048f721b441bab3f4a357c6edb48fe05a43496c939c53c7d27f25e196"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.983324 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerStarted","Data":"6d4e5b210b0a845a76abc0edca4abff76a756f3c363dd8a5b357f6605d5655ef"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.983393 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerStarted","Data":"d91f3ebebc36ef9a9d13357d36098a80eec99c3f1cf3fe87032716e5a9e45988"} Jan 23 08:40:52 crc kubenswrapper[4860]: I0123 08:40:52.983410 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerStarted","Data":"63d9f5fc0439a9e46a7032e7c4f47464d474aef599a9eb801c81b72b8076b608"} Jan 23 08:40:53 crc kubenswrapper[4860]: I0123 08:40:52.998133 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" podStartSLOduration=3.990984224 podStartE2EDuration="26.998101812s" podCreationTimestamp="2026-01-23 08:40:26 +0000 UTC" firstStartedPulling="2026-01-23 08:40:29.00819752 +0000 UTC m=+1535.636247705" lastFinishedPulling="2026-01-23 08:40:52.015315108 +0000 UTC m=+1558.643365293" observedRunningTime="2026-01-23 08:40:52.991732345 +0000 UTC m=+1559.619782530" watchObservedRunningTime="2026-01-23 08:40:52.998101812 +0000 UTC m=+1559.626151997" Jan 23 08:40:53 crc kubenswrapper[4860]: I0123 08:40:53.023513 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" podStartSLOduration=10.697796092 podStartE2EDuration="11.0234916s" podCreationTimestamp="2026-01-23 08:40:42 +0000 UTC" firstStartedPulling="2026-01-23 08:40:52.183619752 +0000 UTC m=+1558.811669937" lastFinishedPulling="2026-01-23 08:40:52.50931526 +0000 UTC m=+1559.137365445" observedRunningTime="2026-01-23 08:40:53.007910955 +0000 UTC m=+1559.635961140" watchObservedRunningTime="2026-01-23 08:40:53.0234916 +0000 UTC m=+1559.651541775" Jan 23 08:40:53 crc 
kubenswrapper[4860]: I0123 08:40:53.118287 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.852958099 podStartE2EDuration="43.118264535s" podCreationTimestamp="2026-01-23 08:40:10 +0000 UTC" firstStartedPulling="2026-01-23 08:40:29.747987843 +0000 UTC m=+1536.376038028" lastFinishedPulling="2026-01-23 08:40:52.013294269 +0000 UTC m=+1558.641344464" observedRunningTime="2026-01-23 08:40:53.115567618 +0000 UTC m=+1559.743617813" watchObservedRunningTime="2026-01-23 08:40:53.118264535 +0000 UTC m=+1559.746314720" Jan 23 08:40:53 crc kubenswrapper[4860]: I0123 08:40:53.118699 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" podStartSLOduration=7.221161265 podStartE2EDuration="19.118691875s" podCreationTimestamp="2026-01-23 08:40:34 +0000 UTC" firstStartedPulling="2026-01-23 08:40:40.176688875 +0000 UTC m=+1546.804739060" lastFinishedPulling="2026-01-23 08:40:52.074219485 +0000 UTC m=+1558.702269670" observedRunningTime="2026-01-23 08:40:53.049133685 +0000 UTC m=+1559.677183900" watchObservedRunningTime="2026-01-23 08:40:53.118691875 +0000 UTC m=+1559.746742060" Jan 23 08:40:53 crc kubenswrapper[4860]: I0123 08:40:53.133649 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" podStartSLOduration=6.335450524 podStartE2EDuration="24.133632585s" podCreationTimestamp="2026-01-23 08:40:29 +0000 UTC" firstStartedPulling="2026-01-23 08:40:34.216489062 +0000 UTC m=+1540.844539247" lastFinishedPulling="2026-01-23 08:40:52.014671123 +0000 UTC m=+1558.642721308" observedRunningTime="2026-01-23 08:40:53.132903157 +0000 UTC m=+1559.760953342" watchObservedRunningTime="2026-01-23 08:40:53.133632585 +0000 UTC m=+1559.761682770" Jan 23 08:40:53 crc kubenswrapper[4860]: I0123 08:40:53.157520 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" podStartSLOduration=6.882016501 podStartE2EDuration="7.157497006s" podCreationTimestamp="2026-01-23 08:40:46 +0000 UTC" firstStartedPulling="2026-01-23 08:40:52.314728646 +0000 UTC m=+1558.942778831" lastFinishedPulling="2026-01-23 08:40:52.590209151 +0000 UTC m=+1559.218259336" observedRunningTime="2026-01-23 08:40:53.156147252 +0000 UTC m=+1559.784197437" watchObservedRunningTime="2026-01-23 08:40:53.157497006 +0000 UTC m=+1559.785547201" Jan 23 08:40:59 crc kubenswrapper[4860]: I0123 08:40:59.735263 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zhhlg"] Jan 23 08:40:59 crc kubenswrapper[4860]: I0123 08:40:59.735942 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" podUID="a45edd37-9f00-44b2-be28-7891a2ee264e" containerName="default-interconnect" containerID="cri-o://df257c32859d766055541f74fe1790c8a9922f0989279cd1f31358512550138f" gracePeriod=30 Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.034074 4860 generic.go:334] "Generic (PLEG): container finished" podID="a45edd37-9f00-44b2-be28-7891a2ee264e" containerID="df257c32859d766055541f74fe1790c8a9922f0989279cd1f31358512550138f" exitCode=0 Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.034116 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" event={"ID":"a45edd37-9f00-44b2-be28-7891a2ee264e","Type":"ContainerDied","Data":"df257c32859d766055541f74fe1790c8a9922f0989279cd1f31358512550138f"} Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.634925 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748278 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-config\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748327 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-ca\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748354 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-credentials\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748377 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-ca\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748407 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-users\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748494 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqgbn\" (UniqueName: \"kubernetes.io/projected/a45edd37-9f00-44b2-be28-7891a2ee264e-kube-api-access-bqgbn\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.748563 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-credentials\") pod \"a45edd37-9f00-44b2-be28-7891a2ee264e\" (UID: \"a45edd37-9f00-44b2-be28-7891a2ee264e\") " Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.749432 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.755028 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.755511 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.757240 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45edd37-9f00-44b2-be28-7891a2ee264e-kube-api-access-bqgbn" (OuterVolumeSpecName: "kube-api-access-bqgbn") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "kube-api-access-bqgbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.757244 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.757365 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.766284 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "a45edd37-9f00-44b2-be28-7891a2ee264e" (UID: "a45edd37-9f00-44b2-be28-7891a2ee264e"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850272 4860 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850314 4860 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850328 4860 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850341 4860 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850353 4860 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850365 4860 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a45edd37-9f00-44b2-be28-7891a2ee264e-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:00 crc kubenswrapper[4860]: I0123 08:41:00.850374 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqgbn\" (UniqueName: \"kubernetes.io/projected/a45edd37-9f00-44b2-be28-7891a2ee264e-kube-api-access-bqgbn\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.042136 4860 generic.go:334] "Generic (PLEG): container finished" podID="50c18fce-7fbd-465c-b950-acfef76ac285" containerID="3d9a6d2f4c47d66a50464cdbf0106e48a9aae292eada8fa9c017a7b89271d324" exitCode=0 Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.042214 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerDied","Data":"3d9a6d2f4c47d66a50464cdbf0106e48a9aae292eada8fa9c017a7b89271d324"} Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.042731 4860 scope.go:117] "RemoveContainer" containerID="3d9a6d2f4c47d66a50464cdbf0106e48a9aae292eada8fa9c017a7b89271d324" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.043827 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.044097 4860 generic.go:334] "Generic (PLEG): container finished" podID="a5e86732-0630-4800-a005-750406521136" containerID="d91f3ebebc36ef9a9d13357d36098a80eec99c3f1cf3fe87032716e5a9e45988" exitCode=0 Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.044171 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" 
event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerDied","Data":"d91f3ebebc36ef9a9d13357d36098a80eec99c3f1cf3fe87032716e5a9e45988"} Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.044754 4860 scope.go:117] "RemoveContainer" containerID="d91f3ebebc36ef9a9d13357d36098a80eec99c3f1cf3fe87032716e5a9e45988" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.046500 4860 generic.go:334] "Generic (PLEG): container finished" podID="e79ec9dc-fb1a-4807-815d-3a74e95f0ffe" containerID="668be1c3dcd4ce7d7700af3428b8228c322775c2de22a577ab1fc28a10ee7b60" exitCode=0 Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.046572 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerDied","Data":"668be1c3dcd4ce7d7700af3428b8228c322775c2de22a577ab1fc28a10ee7b60"} Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.047005 4860 scope.go:117] "RemoveContainer" containerID="668be1c3dcd4ce7d7700af3428b8228c322775c2de22a577ab1fc28a10ee7b60" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.049510 4860 generic.go:334] "Generic (PLEG): container finished" podID="8f71d5ef-0409-48f6-ac20-c4cfd510bac5" containerID="6f2649b2b25c26b1d00da537871d508c1402c385bd517e28f9a3a820b94aa72d" exitCode=0 Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.049581 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerDied","Data":"6f2649b2b25c26b1d00da537871d508c1402c385bd517e28f9a3a820b94aa72d"} Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.050164 4860 scope.go:117] "RemoveContainer" containerID="6f2649b2b25c26b1d00da537871d508c1402c385bd517e28f9a3a820b94aa72d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.051953 4860 generic.go:334] "Generic (PLEG): container finished" podID="21111e1f-d004-4b39-b149-d7d470f5c096" containerID="461f240a8d85dd9b09f023096153657b06f559617cd91c8d43dbb8593b89dac8" exitCode=0 Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.052768 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerDied","Data":"461f240a8d85dd9b09f023096153657b06f559617cd91c8d43dbb8593b89dac8"} Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.053196 4860 scope.go:117] "RemoveContainer" containerID="461f240a8d85dd9b09f023096153657b06f559617cd91c8d43dbb8593b89dac8" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.053885 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" event={"ID":"a45edd37-9f00-44b2-be28-7891a2ee264e","Type":"ContainerDied","Data":"88e5517dc2b72f5e91a2eebf9316abd64238fd27e75611f621513a0567ab9a66"} Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.053935 4860 scope.go:117] "RemoveContainer" containerID="df257c32859d766055541f74fe1790c8a9922f0989279cd1f31358512550138f" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.054007 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-zhhlg" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.189978 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zhhlg"] Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.195179 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-zhhlg"] Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.668496 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45edd37-9f00-44b2-be28-7891a2ee264e" path="/var/lib/kubelet/pods/a45edd37-9f00-44b2-be28-7891a2ee264e/volumes" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.680765 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-d744d"] Jan 23 08:41:01 crc kubenswrapper[4860]: E0123 08:41:01.681058 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45edd37-9f00-44b2-be28-7891a2ee264e" containerName="default-interconnect" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.681075 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45edd37-9f00-44b2-be28-7891a2ee264e" containerName="default-interconnect" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.681183 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45edd37-9f00-44b2-be28-7891a2ee264e" containerName="default-interconnect" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.681591 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.683610 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.684291 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.684798 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.684812 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.685048 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.685203 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.685499 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-wxx6s" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.693623 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-d744d"] Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762302 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-sasl-users\") pod \"default-interconnect-68864d46cb-d744d\" (UID: 
\"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762509 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762612 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762769 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762802 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762830 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhfw\" (UniqueName: \"kubernetes.io/projected/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-kube-api-access-brhfw\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.762927 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-sasl-config\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.864905 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-sasl-users\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.865006 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: 
\"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.865115 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.865163 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.865205 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.865251 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brhfw\" (UniqueName: \"kubernetes.io/projected/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-kube-api-access-brhfw\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.865338 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-sasl-config\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.867426 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-sasl-config\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.870309 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.870844 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.871280 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-sasl-users\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.872670 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.876281 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:01 crc kubenswrapper[4860]: I0123 08:41:01.885331 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brhfw\" (UniqueName: \"kubernetes.io/projected/163502a6-c7b9-4a7f-af7e-3bc7b1d066e3-kube-api-access-brhfw\") pod \"default-interconnect-68864d46cb-d744d\" (UID: \"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3\") " pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.006152 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-d744d" Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.063162 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerStarted","Data":"462ba0c28392c7cdd1a31a9bff39ea11524443ec39402b6ec8309c878c908d85"} Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.067166 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerStarted","Data":"5d4e2d22c3858e6347a261e6b2618faba861a3959e1e86adfea74a4d705f7aef"} Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.070870 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerStarted","Data":"bf87092cbc8efb1fd7a4f62bbf963bcc7cec9b83b84d790565e67c3a0ab01e1e"} Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.074796 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerStarted","Data":"3b2314547d04fb11862ad2e35fcb4667f5197a086255de5f344ddcfc15479ab7"} Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.076778 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerStarted","Data":"e180b69f8340f645045caf0e8b1a30e9f60b868fec1e80ebe0cd530c12418ae5"} Jan 23 08:41:02 crc kubenswrapper[4860]: I0123 08:41:02.471367 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-d744d"] Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.089738 4860 generic.go:334] "Generic (PLEG): container finished" podID="e79ec9dc-fb1a-4807-815d-3a74e95f0ffe" containerID="3b2314547d04fb11862ad2e35fcb4667f5197a086255de5f344ddcfc15479ab7" exitCode=0 Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.089831 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerDied","Data":"3b2314547d04fb11862ad2e35fcb4667f5197a086255de5f344ddcfc15479ab7"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.090419 4860 scope.go:117] "RemoveContainer" containerID="668be1c3dcd4ce7d7700af3428b8228c322775c2de22a577ab1fc28a10ee7b60" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.091327 4860 scope.go:117] "RemoveContainer" containerID="3b2314547d04fb11862ad2e35fcb4667f5197a086255de5f344ddcfc15479ab7" Jan 23 08:41:03 crc kubenswrapper[4860]: E0123 08:41:03.091573 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7_service-telemetry(e79ec9dc-fb1a-4807-815d-3a74e95f0ffe)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" podUID="e79ec9dc-fb1a-4807-815d-3a74e95f0ffe" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.100643 4860 generic.go:334] 
"Generic (PLEG): container finished" podID="8f71d5ef-0409-48f6-ac20-c4cfd510bac5" containerID="e180b69f8340f645045caf0e8b1a30e9f60b868fec1e80ebe0cd530c12418ae5" exitCode=0 Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.100730 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerDied","Data":"e180b69f8340f645045caf0e8b1a30e9f60b868fec1e80ebe0cd530c12418ae5"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.101233 4860 scope.go:117] "RemoveContainer" containerID="e180b69f8340f645045caf0e8b1a30e9f60b868fec1e80ebe0cd530c12418ae5" Jan 23 08:41:03 crc kubenswrapper[4860]: E0123 08:41:03.101460 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq_service-telemetry(8f71d5ef-0409-48f6-ac20-c4cfd510bac5)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" podUID="8f71d5ef-0409-48f6-ac20-c4cfd510bac5" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.104708 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-d744d" event={"ID":"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3","Type":"ContainerStarted","Data":"034edba745803ef6a32a6c3f773909a669c74790cff5137914396393637406c8"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.104755 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-d744d" event={"ID":"163502a6-c7b9-4a7f-af7e-3bc7b1d066e3","Type":"ContainerStarted","Data":"00168dd0ba37ce041db0a5409b42274f67c96420bb94000c84712e1d8cba8344"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.110517 4860 generic.go:334] "Generic (PLEG): container finished" podID="21111e1f-d004-4b39-b149-d7d470f5c096" containerID="462ba0c28392c7cdd1a31a9bff39ea11524443ec39402b6ec8309c878c908d85" exitCode=0 Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.110613 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerDied","Data":"462ba0c28392c7cdd1a31a9bff39ea11524443ec39402b6ec8309c878c908d85"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.111456 4860 scope.go:117] "RemoveContainer" containerID="462ba0c28392c7cdd1a31a9bff39ea11524443ec39402b6ec8309c878c908d85" Jan 23 08:41:03 crc kubenswrapper[4860]: E0123 08:41:03.111637 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks_service-telemetry(21111e1f-d004-4b39-b149-d7d470f5c096)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" podUID="21111e1f-d004-4b39-b149-d7d470f5c096" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.134104 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-d744d" podStartSLOduration=4.134077223 podStartE2EDuration="4.134077223s" podCreationTimestamp="2026-01-23 08:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 08:41:03.131082739 +0000 UTC m=+1569.759132924" watchObservedRunningTime="2026-01-23 08:41:03.134077223 +0000 UTC m=+1569.762127418" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.136397 4860 generic.go:334] "Generic (PLEG): container finished" podID="50c18fce-7fbd-465c-b950-acfef76ac285" containerID="5d4e2d22c3858e6347a261e6b2618faba861a3959e1e86adfea74a4d705f7aef" exitCode=0 Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.136525 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerDied","Data":"5d4e2d22c3858e6347a261e6b2618faba861a3959e1e86adfea74a4d705f7aef"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.137087 4860 scope.go:117] "RemoveContainer" containerID="5d4e2d22c3858e6347a261e6b2618faba861a3959e1e86adfea74a4d705f7aef" Jan 23 08:41:03 crc kubenswrapper[4860]: E0123 08:41:03.137265 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf_service-telemetry(50c18fce-7fbd-465c-b950-acfef76ac285)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" podUID="50c18fce-7fbd-465c-b950-acfef76ac285" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.151681 4860 scope.go:117] "RemoveContainer" containerID="6f2649b2b25c26b1d00da537871d508c1402c385bd517e28f9a3a820b94aa72d" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.169309 4860 generic.go:334] "Generic (PLEG): container finished" podID="a5e86732-0630-4800-a005-750406521136" containerID="bf87092cbc8efb1fd7a4f62bbf963bcc7cec9b83b84d790565e67c3a0ab01e1e" exitCode=0 Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.169368 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerDied","Data":"bf87092cbc8efb1fd7a4f62bbf963bcc7cec9b83b84d790565e67c3a0ab01e1e"} Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.170281 4860 scope.go:117] "RemoveContainer" containerID="bf87092cbc8efb1fd7a4f62bbf963bcc7cec9b83b84d790565e67c3a0ab01e1e" Jan 23 08:41:03 crc kubenswrapper[4860]: E0123 08:41:03.170562 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4_service-telemetry(a5e86732-0630-4800-a005-750406521136)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" podUID="a5e86732-0630-4800-a005-750406521136" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.338819 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.340185 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.360085 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.362059 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.364852 4860 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.398788 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.398905 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8hsl\" (UniqueName: \"kubernetes.io/projected/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-kube-api-access-k8hsl\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.398936 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-qdr-test-config\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.494448 4860 scope.go:117] "RemoveContainer" containerID="461f240a8d85dd9b09f023096153657b06f559617cd91c8d43dbb8593b89dac8" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.499717 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.499772 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8hsl\" (UniqueName: \"kubernetes.io/projected/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-kube-api-access-k8hsl\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.499798 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-qdr-test-config\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.501510 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-qdr-test-config\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.507204 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.524302 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8hsl\" (UniqueName: \"kubernetes.io/projected/2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6-kube-api-access-k8hsl\") pod \"qdr-test\" (UID: \"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6\") " pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.684234 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.689172 4860 scope.go:117] "RemoveContainer" containerID="3d9a6d2f4c47d66a50464cdbf0106e48a9aae292eada8fa9c017a7b89271d324" Jan 23 08:41:03 crc kubenswrapper[4860]: I0123 08:41:03.782519 4860 scope.go:117] "RemoveContainer" containerID="d91f3ebebc36ef9a9d13357d36098a80eec99c3f1cf3fe87032716e5a9e45988" Jan 23 08:41:04 crc kubenswrapper[4860]: I0123 08:41:04.179315 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 23 08:41:04 crc kubenswrapper[4860]: W0123 08:41:04.185467 4860 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dd093f5_013c_45f6_8d5a_3a1b09dbc0c6.slice/crio-ecc31b709f36d3fb5ad541d6804868a3ec561213da275d291e8d9a47048cfdaa WatchSource:0}: Error finding container ecc31b709f36d3fb5ad541d6804868a3ec561213da275d291e8d9a47048cfdaa: Status 404 returned error can't find the container with id ecc31b709f36d3fb5ad541d6804868a3ec561213da275d291e8d9a47048cfdaa Jan 23 08:41:05 crc kubenswrapper[4860]: I0123 08:41:05.197994 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6","Type":"ContainerStarted","Data":"ecc31b709f36d3fb5ad541d6804868a3ec561213da275d291e8d9a47048cfdaa"} Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.265670 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6","Type":"ContainerStarted","Data":"3b51271bcaff73d3fa35c74e0865df0130aac099c6d460d2b82b6f32f18dd3e1"} Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.285993 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.4245576 podStartE2EDuration="9.285969488s" podCreationTimestamp="2026-01-23 08:41:03 +0000 UTC" firstStartedPulling="2026-01-23 08:41:04.187475304 +0000 UTC m=+1570.815525489" lastFinishedPulling="2026-01-23 08:41:12.048887192 +0000 UTC m=+1578.676937377" observedRunningTime="2026-01-23 08:41:12.279705382 +0000 UTC m=+1578.907755567" watchObservedRunningTime="2026-01-23 08:41:12.285969488 +0000 UTC m=+1578.914019673" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.647233 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qhvxd"] Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.648556 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.651110 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.652229 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.652401 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.652522 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.652839 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.657617 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.658331 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qhvxd"] Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.785592 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-sensubility-config\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.786493 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-config\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.786526 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-healthcheck-log\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.786589 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.786611 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwfp\" (UniqueName: \"kubernetes.io/projected/95d017f1-dc34-4fee-adc1-081a6969152c-kube-api-access-djwfp\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.786776 4860 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.786834 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888541 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888640 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888692 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-sensubility-config\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888719 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-config\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888745 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-healthcheck-log\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888763 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.888790 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwfp\" (UniqueName: \"kubernetes.io/projected/95d017f1-dc34-4fee-adc1-081a6969152c-kube-api-access-djwfp\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: 
\"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.889692 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.890727 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.890770 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-config\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.891576 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-healthcheck-log\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.891767 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-sensubility-config\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.903136 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-publisher\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.919169 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwfp\" (UniqueName: \"kubernetes.io/projected/95d017f1-dc34-4fee-adc1-081a6969152c-kube-api-access-djwfp\") pod \"stf-smoketest-smoke1-qhvxd\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:12 crc kubenswrapper[4860]: I0123 08:41:12.968223 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.076483 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.077688 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.087104 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.194403 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwjc\" (UniqueName: \"kubernetes.io/projected/aeb71573-bc5e-495f-80ed-ed870a231d05-kube-api-access-5lwjc\") pod \"curl\" (UID: \"aeb71573-bc5e-495f-80ed-ed870a231d05\") " pod="service-telemetry/curl" Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.219534 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-qhvxd"] Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.275167 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" event={"ID":"95d017f1-dc34-4fee-adc1-081a6969152c","Type":"ContainerStarted","Data":"a0b5b2f786c7a5830cb026f47c1139949ec38329243119684c3f66b6d55f50a6"} Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.295918 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lwjc\" (UniqueName: \"kubernetes.io/projected/aeb71573-bc5e-495f-80ed-ed870a231d05-kube-api-access-5lwjc\") pod \"curl\" (UID: \"aeb71573-bc5e-495f-80ed-ed870a231d05\") " pod="service-telemetry/curl" Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.317993 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lwjc\" (UniqueName: \"kubernetes.io/projected/aeb71573-bc5e-495f-80ed-ed870a231d05-kube-api-access-5lwjc\") pod \"curl\" (UID: \"aeb71573-bc5e-495f-80ed-ed870a231d05\") " pod="service-telemetry/curl" Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.413703 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 23 08:41:13 crc kubenswrapper[4860]: I0123 08:41:13.656813 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 23 08:41:14 crc kubenswrapper[4860]: I0123 08:41:14.281673 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"aeb71573-bc5e-495f-80ed-ed870a231d05","Type":"ContainerStarted","Data":"330ff050d146c5975a1348f402af5fc253d17b4409c947d94f601427ddbc8937"} Jan 23 08:41:14 crc kubenswrapper[4860]: I0123 08:41:14.657280 4860 scope.go:117] "RemoveContainer" containerID="e180b69f8340f645045caf0e8b1a30e9f60b868fec1e80ebe0cd530c12418ae5" Jan 23 08:41:16 crc kubenswrapper[4860]: I0123 08:41:16.297753 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq" event={"ID":"8f71d5ef-0409-48f6-ac20-c4cfd510bac5","Type":"ContainerStarted","Data":"a7535558faea218c32ff12122463836e527efc95964a5690dfb544a09f397518"} Jan 23 08:41:17 crc kubenswrapper[4860]: I0123 08:41:17.658195 4860 scope.go:117] "RemoveContainer" containerID="bf87092cbc8efb1fd7a4f62bbf963bcc7cec9b83b84d790565e67c3a0ab01e1e" Jan 23 08:41:17 crc kubenswrapper[4860]: I0123 08:41:17.658550 4860 scope.go:117] "RemoveContainer" containerID="462ba0c28392c7cdd1a31a9bff39ea11524443ec39402b6ec8309c878c908d85" Jan 23 08:41:17 crc kubenswrapper[4860]: I0123 08:41:17.658660 4860 scope.go:117] "RemoveContainer" containerID="3b2314547d04fb11862ad2e35fcb4667f5197a086255de5f344ddcfc15479ab7" Jan 23 08:41:18 crc kubenswrapper[4860]: I0123 08:41:18.657997 4860 scope.go:117] "RemoveContainer" containerID="5d4e2d22c3858e6347a261e6b2618faba861a3959e1e86adfea74a4d705f7aef" Jan 23 08:41:31 crc kubenswrapper[4860]: E0123 08:41:31.555975 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-collectd:current-tripleo" Jan 23 08:41:31 crc kubenswrapper[4860]: E0123 08:41:31.557405 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:smoketest-collectd,Image:quay.io/tripleomastercentos9/openstack-collectd:current-tripleo,Command:[/smoketest_collectd_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:JnmV5ezw8H6hsJelA0KMAa6A,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc2OTE2MTI1OCwiaWF0IjoxNzY5MTU3NjU4LCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIwYWU5ODI3Zi1mYjIzLTQzODQtOWJhMi05NGJjMGU1YWMxYjYiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjMwNGEwNDA1LWE0NDAtNDU4Ni1hOGE0LTgzZTZiOWVjY2M1NiJ9fSwibmJmIjoxNzY5MTU3NjU4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.PKmZUNAl5EKoZVSgSeGfldbZ-03hY543jxFMYne51ZZWonqlvOdMwWpErxW_hNVbf8cyb9UQtScj-F_ix31YY9U3_j6n_YjNPVMz0w2oZsBnzs1wrhLe5E4xDXQhgC7HdKFdrhGnJiiauo1AZVmpEsI8zEHbiSXflIDWOvT3BKhV8ggkc2iFZIcpsJ4lFodrh-N4I-M7Df7XbFJEKDZsWecc3wBN2Nl6oUDwEbTUSukn1vHjJK0QEjtmxSLTZMdTbeM9jbLUqwJ9R3v2x2Qsui6m8fBaIEvv8eI4eMwz3ykriS7Udnirlo6MuTbQtdMYuPmShsp2xqdvU7TjkXGrdG41ebtsjuMfVcHl9Q2tznP5GG9YIiACY1iy9LULHZQmmQPd-XVnqa9opKaDdstgJzMJJY9b7M_i3yceogTICwf9HipP6ulQBlQccZ4TGn5oRKIhypbFGblaX_xHSt4wph4xfQ2FZpU4Ij2cYgaXUmQa5Q5aljOw28lSeHBtfDZyXw7IiMHVzU3ZSyarukGV6mizbpIahjrShbBTlP42absjViL9NJnBrI5gNtPIGFW4SA_vIovMDPz9mkOepUov7_kcSHX7V5srmjsmTH3QeTdqC-x8dOtnGvtL1Kn_b1fgb35kifP2b2oHvQZu8Dx6So3wVpYUVGzZVioTCPt010M,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:collectd-config,ReadOnly:false,MountPath:/etc/minimal-collectd.conf.template,SubPath:minimal-collectd.conf.template,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sensubility-config,ReadOnly:false,MountPath:/etc/collectd-sensubility.conf,SubPath:collectd-sensubility.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:healthcheck-log,ReadOnly:false,MountPath:/healthcheck.log,SubPath:healthcheck.log,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:collectd-entrypoint-script,ReadOnly:false,MountPath:/smoketest_collectd_entrypoint.sh,SubPath:smoketest_collectd_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djwfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-qhvxd_service-telemetry(95d017f1-dc34-4fee-adc1-081a6969152c): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:41:33 crc kubenswrapper[4860]: I0123 08:41:33.435995 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"aeb71573-bc5e-495f-80ed-ed870a231d05","Type":"ContainerStarted","Data":"0ed72f385b650e707eed60d350d836bff85ecb86ccdbd291501d099baebf4ba2"} Jan 23 08:41:34 crc kubenswrapper[4860]: I0123 08:41:34.444306 4860 generic.go:334] "Generic (PLEG): container finished" podID="aeb71573-bc5e-495f-80ed-ed870a231d05" containerID="0ed72f385b650e707eed60d350d836bff85ecb86ccdbd291501d099baebf4ba2" exitCode=0 Jan 23 08:41:34 crc kubenswrapper[4860]: I0123 08:41:34.444743 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"aeb71573-bc5e-495f-80ed-ed870a231d05","Type":"ContainerDied","Data":"0ed72f385b650e707eed60d350d836bff85ecb86ccdbd291501d099baebf4ba2"} Jan 23 08:41:35 crc kubenswrapper[4860]: I0123 08:41:35.457206 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7" event={"ID":"e79ec9dc-fb1a-4807-815d-3a74e95f0ffe","Type":"ContainerStarted","Data":"07ca419d10b3ab15b767088a2c27321ab3412b23de5b8892181ab6f9b5a7ed1b"} Jan 23 08:41:35 crc kubenswrapper[4860]: I0123 08:41:35.467333 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks" event={"ID":"21111e1f-d004-4b39-b149-d7d470f5c096","Type":"ContainerStarted","Data":"e7167fc078c4695ddedf8e1184fc3813e57c76ba89b90f16f380578658568d21"} Jan 23 08:41:35 crc kubenswrapper[4860]: I0123 08:41:35.470545 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf" event={"ID":"50c18fce-7fbd-465c-b950-acfef76ac285","Type":"ContainerStarted","Data":"3e9fbadda5141f386f4c22ac2310ae096df9f60e5f8863b887e42b913e39d654"} Jan 23 08:41:35 crc kubenswrapper[4860]: I0123 08:41:35.474497 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4" event={"ID":"a5e86732-0630-4800-a005-750406521136","Type":"ContainerStarted","Data":"03426bdc5134c31ed0df5dda9f813bf6bbf28bcb970b4a7165243ff71f1b47b0"} Jan 23 08:41:35 crc kubenswrapper[4860]: I0123 08:41:35.798774 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 23 08:41:35 crc kubenswrapper[4860]: I0123 08:41:35.969976 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_aeb71573-bc5e-495f-80ed-ed870a231d05/curl/0.log" Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.004767 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lwjc\" (UniqueName: \"kubernetes.io/projected/aeb71573-bc5e-495f-80ed-ed870a231d05-kube-api-access-5lwjc\") pod \"aeb71573-bc5e-495f-80ed-ed870a231d05\" (UID: \"aeb71573-bc5e-495f-80ed-ed870a231d05\") " Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.010377 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb71573-bc5e-495f-80ed-ed870a231d05-kube-api-access-5lwjc" (OuterVolumeSpecName: "kube-api-access-5lwjc") pod "aeb71573-bc5e-495f-80ed-ed870a231d05" (UID: "aeb71573-bc5e-495f-80ed-ed870a231d05"). InnerVolumeSpecName "kube-api-access-5lwjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.105967 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lwjc\" (UniqueName: \"kubernetes.io/projected/aeb71573-bc5e-495f-80ed-ed870a231d05-kube-api-access-5lwjc\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.222050 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-cngd4_55117c66-88ef-4d9e-9e07-3c228a43a0d8/prometheus-webhook-snmp/0.log" Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.485477 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.485421 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"aeb71573-bc5e-495f-80ed-ed870a231d05","Type":"ContainerDied","Data":"330ff050d146c5975a1348f402af5fc253d17b4409c947d94f601427ddbc8937"} Jan 23 08:41:36 crc kubenswrapper[4860]: I0123 08:41:36.485631 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="330ff050d146c5975a1348f402af5fc253d17b4409c947d94f601427ddbc8937" Jan 23 08:41:54 crc kubenswrapper[4860]: E0123 08:41:54.735668 4860 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo" Jan 23 08:41:54 crc kubenswrapper[4860]: E0123 08:41:54.736411 4860 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-ceilometer,Image:quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo,Command:[/smoketest_ceilometer_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:JnmV5ezw8H6hsJelA0KMAa6A,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc2OTE2MTI1OCwiaWF0IjoxNzY5MTU3NjU4LCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIwYWU5ODI3Zi1mYjIzLTQzODQtOWJhMi05NGJjMGU1YWMxYjYiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjMwNGEwNDA1LWE0NDAtNDU4Ni1hOGE0LTgzZTZiOWVjY2M1NiJ9fSwibmJmIjoxNzY5MTU3NjU4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.PKmZUNAl5EKoZVSgSeGfldbZ-03hY543jxFMYne51ZZWonqlvOdMwWpErxW_hNVbf8cyb9UQtScj-F_ix31YY9U3_j6n_YjNPVMz0w2oZsBnzs1wrhLe5E4xDXQhgC7HdKFdrhGnJiiauo1AZVmpEsI8zEHbiSXflIDWOvT3BKhV8ggkc2iFZIcpsJ4lFodrh-N4I-M7Df7XbFJEKDZsWecc3wBN2Nl6oUDwEbTUSukn1vHjJK0QEjtmxSLTZMdTbeM9jbLUqwJ9R3v2x2Qsui6m8fBaIEvv8eI4eMwz3ykriS7Udnirlo6MuTbQtdMYuPmShsp2xqdvU7TjkXGrdG41ebtsjuMfVcHl9Q2tznP5GG9YIiACY1iy9LULHZQmmQPd-XVnqa9opKaDdstgJzMJJY9b7M_i3yceogTICwf9HipP6ulQBlQccZ4TGn5oRKIhypbFGblaX_xHSt4wph4xfQ2FZpU4Ij2cYgaXUmQa5Q5aljOw28lSeHBtfDZyXw7IiMHVzU3ZSyarukGV6mizbpIahjrShbBTlP42absjViL9NJnBrI5gNtPIGFW4SA_vIovMDPz9mkOepUov7_kcSHX7V5srmjsmTH3QeTdqC-x8dOtnGvtL1Kn_b1fgb35kifP2b2oHvQZu8Dx6So3wVpYUVGzZVioTCPt010M,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceC
laim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ceilometer-publisher,ReadOnly:false,MountPath:/ceilometer_publish.py,SubPath:ceilometer_publish.py,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceilometer-entrypoint-script,ReadOnly:false,MountPath:/smoketest_ceilometer_entrypoint.sh,SubPath:smoketest_ceilometer_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djwfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-qhvxd_service-telemetry(95d017f1-dc34-4fee-adc1-081a6969152c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:41:54 crc kubenswrapper[4860]: E0123 08:41:54.737699 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"smoketest-collectd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"smoketest-ceilometer\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" Jan 23 08:41:56 crc kubenswrapper[4860]: I0123 08:41:56.775491 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:41:56 crc kubenswrapper[4860]: I0123 08:41:56.775748 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:41:56 crc kubenswrapper[4860]: E0123 08:41:56.864839 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-ceilometer\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" Jan 23 08:41:57 crc kubenswrapper[4860]: I0123 08:41:57.636748 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" event={"ID":"95d017f1-dc34-4fee-adc1-081a6969152c","Type":"ContainerStarted","Data":"6c8c805634f97c2adbf3312b85249a5b20b8c5d04732934b4ed155fb8f71e73d"} Jan 23 08:41:57 crc kubenswrapper[4860]: E0123 08:41:57.639043 4860 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"smoketest-ceilometer\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-ceilometer-notification:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" Jan 23 08:42:06 crc kubenswrapper[4860]: I0123 08:42:06.415148 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-cngd4_55117c66-88ef-4d9e-9e07-3c228a43a0d8/prometheus-webhook-snmp/0.log" Jan 23 08:42:12 crc kubenswrapper[4860]: I0123 08:42:12.762173 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" event={"ID":"95d017f1-dc34-4fee-adc1-081a6969152c","Type":"ContainerStarted","Data":"65938f1e57b9fa004e0a2eb2c23b92f7f277ec88073b5f8fbb16282a9a3c8ab3"} Jan 23 08:42:12 crc kubenswrapper[4860]: I0123 08:42:12.784530 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" podStartSLOduration=2.503098331 podStartE2EDuration="1m0.784515262s" podCreationTimestamp="2026-01-23 08:41:12 +0000 UTC" firstStartedPulling="2026-01-23 08:41:13.257408861 +0000 UTC m=+1579.885459046" lastFinishedPulling="2026-01-23 08:42:11.538825782 +0000 UTC m=+1638.166875977" observedRunningTime="2026-01-23 08:42:12.782502192 +0000 UTC m=+1639.410552377" watchObservedRunningTime="2026-01-23 08:42:12.784515262 +0000 UTC m=+1639.412565447" Jan 23 08:42:26 crc kubenswrapper[4860]: I0123 08:42:26.777980 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:42:26 crc kubenswrapper[4860]: I0123 08:42:26.778378 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:42:30 crc kubenswrapper[4860]: I0123 08:42:30.899503 4860 generic.go:334] "Generic (PLEG): container finished" podID="95d017f1-dc34-4fee-adc1-081a6969152c" containerID="6c8c805634f97c2adbf3312b85249a5b20b8c5d04732934b4ed155fb8f71e73d" exitCode=0 Jan 23 08:42:30 crc kubenswrapper[4860]: I0123 08:42:30.899616 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" event={"ID":"95d017f1-dc34-4fee-adc1-081a6969152c","Type":"ContainerDied","Data":"6c8c805634f97c2adbf3312b85249a5b20b8c5d04732934b4ed155fb8f71e73d"} Jan 23 08:42:30 crc kubenswrapper[4860]: I0123 08:42:30.900603 4860 scope.go:117] "RemoveContainer" containerID="6c8c805634f97c2adbf3312b85249a5b20b8c5d04732934b4ed155fb8f71e73d" Jan 23 08:42:44 crc kubenswrapper[4860]: I0123 08:42:44.000519 4860 generic.go:334] "Generic (PLEG): container finished" podID="95d017f1-dc34-4fee-adc1-081a6969152c" containerID="65938f1e57b9fa004e0a2eb2c23b92f7f277ec88073b5f8fbb16282a9a3c8ab3" exitCode=0 Jan 23 08:42:44 crc kubenswrapper[4860]: I0123 08:42:44.000823 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" 
event={"ID":"95d017f1-dc34-4fee-adc1-081a6969152c","Type":"ContainerDied","Data":"65938f1e57b9fa004e0a2eb2c23b92f7f277ec88073b5f8fbb16282a9a3c8ab3"} Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.264126 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458655 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-publisher\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458719 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-healthcheck-log\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458738 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-entrypoint-script\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458768 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwfp\" (UniqueName: \"kubernetes.io/projected/95d017f1-dc34-4fee-adc1-081a6969152c-kube-api-access-djwfp\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458804 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-entrypoint-script\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458837 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-config\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.458869 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-sensubility-config\") pod \"95d017f1-dc34-4fee-adc1-081a6969152c\" (UID: \"95d017f1-dc34-4fee-adc1-081a6969152c\") " Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.465364 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d017f1-dc34-4fee-adc1-081a6969152c-kube-api-access-djwfp" (OuterVolumeSpecName: "kube-api-access-djwfp") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "kube-api-access-djwfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.478195 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.478779 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.478971 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.480186 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.482186 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.486696 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "95d017f1-dc34-4fee-adc1-081a6969152c" (UID: "95d017f1-dc34-4fee-adc1-081a6969152c"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560331 4860 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560381 4860 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560397 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwfp\" (UniqueName: \"kubernetes.io/projected/95d017f1-dc34-4fee-adc1-081a6969152c-kube-api-access-djwfp\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560409 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560421 4860 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560436 4860 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:45 crc kubenswrapper[4860]: I0123 08:42:45.560446 4860 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/95d017f1-dc34-4fee-adc1-081a6969152c-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:46 crc kubenswrapper[4860]: I0123 08:42:46.018132 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" event={"ID":"95d017f1-dc34-4fee-adc1-081a6969152c","Type":"ContainerDied","Data":"a0b5b2f786c7a5830cb026f47c1139949ec38329243119684c3f66b6d55f50a6"} Jan 23 08:42:46 crc kubenswrapper[4860]: I0123 08:42:46.018451 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b5b2f786c7a5830cb026f47c1139949ec38329243119684c3f66b6d55f50a6" Jan 23 08:42:46 crc kubenswrapper[4860]: I0123 08:42:46.018296 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-qhvxd" Jan 23 08:42:47 crc kubenswrapper[4860]: I0123 08:42:47.442984 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-qhvxd_95d017f1-dc34-4fee-adc1-081a6969152c/smoketest-collectd/0.log" Jan 23 08:42:47 crc kubenswrapper[4860]: I0123 08:42:47.750669 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-qhvxd_95d017f1-dc34-4fee-adc1-081a6969152c/smoketest-ceilometer/0.log" Jan 23 08:42:48 crc kubenswrapper[4860]: I0123 08:42:48.065440 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-d744d_163502a6-c7b9-4a7f-af7e-3bc7b1d066e3/default-interconnect/0.log" Jan 23 08:42:48 crc kubenswrapper[4860]: I0123 08:42:48.330855 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7_e79ec9dc-fb1a-4807-815d-3a74e95f0ffe/bridge/2.log" Jan 23 08:42:48 crc kubenswrapper[4860]: I0123 08:42:48.587338 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-jhnf7_e79ec9dc-fb1a-4807-815d-3a74e95f0ffe/sg-core/0.log" Jan 23 08:42:48 crc kubenswrapper[4860]: I0123 08:42:48.864713 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq_8f71d5ef-0409-48f6-ac20-c4cfd510bac5/bridge/2.log" Jan 23 08:42:49 crc kubenswrapper[4860]: I0123 08:42:49.198344 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-74df4947b6-2gmcq_8f71d5ef-0409-48f6-ac20-c4cfd510bac5/sg-core/0.log" Jan 23 08:42:49 crc kubenswrapper[4860]: I0123 08:42:49.549003 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks_21111e1f-d004-4b39-b149-d7d470f5c096/bridge/2.log" Jan 23 08:42:49 crc kubenswrapper[4860]: I0123 08:42:49.839002 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-l8lks_21111e1f-d004-4b39-b149-d7d470f5c096/sg-core/0.log" Jan 23 08:42:50 crc kubenswrapper[4860]: I0123 08:42:50.135821 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4_a5e86732-0630-4800-a005-750406521136/bridge/2.log" Jan 23 08:42:50 crc kubenswrapper[4860]: I0123 08:42:50.404232 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-d8c8898cb-wgdg4_a5e86732-0630-4800-a005-750406521136/sg-core/0.log" Jan 23 08:42:50 crc kubenswrapper[4860]: I0123 08:42:50.650369 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf_50c18fce-7fbd-465c-b950-acfef76ac285/bridge/2.log" Jan 23 08:42:50 crc kubenswrapper[4860]: I0123 08:42:50.881592 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-p8fsf_50c18fce-7fbd-465c-b950-acfef76ac285/sg-core/0.log" Jan 23 08:42:53 crc kubenswrapper[4860]: I0123 08:42:53.698256 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-c6d49cd4c-gg9cf_69a7a482-ef66-4336-9668-20669049a86b/operator/0.log" 
Jan 23 08:42:53 crc kubenswrapper[4860]: I0123 08:42:53.992157 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_e1ec65d6-ed8f-4008-8d71-ce558a169641/prometheus/0.log" Jan 23 08:42:54 crc kubenswrapper[4860]: I0123 08:42:54.369170 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_da5fc4ce-0bad-4e92-9ed7-3b940e128b4f/elasticsearch/0.log" Jan 23 08:42:54 crc kubenswrapper[4860]: I0123 08:42:54.644149 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-cngd4_55117c66-88ef-4d9e-9e07-3c228a43a0d8/prometheus-webhook-snmp/0.log" Jan 23 08:42:54 crc kubenswrapper[4860]: I0123 08:42:54.942616 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_8850e899-d81c-4f1d-b8f1-62c4e8b7ac9f/alertmanager/0.log" Jan 23 08:42:56 crc kubenswrapper[4860]: I0123 08:42:56.775686 4860 patch_prober.go:28] interesting pod/machine-config-daemon-tk8df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:42:56 crc kubenswrapper[4860]: I0123 08:42:56.776288 4860 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:42:56 crc kubenswrapper[4860]: I0123 08:42:56.776348 4860 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" Jan 23 08:42:56 crc kubenswrapper[4860]: I0123 08:42:56.777203 4860 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47"} pod="openshift-machine-config-operator/machine-config-daemon-tk8df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:42:56 crc kubenswrapper[4860]: I0123 08:42:56.777281 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerName="machine-config-daemon" containerID="cri-o://0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" gracePeriod=600 Jan 23 08:42:57 crc kubenswrapper[4860]: E0123 08:42:57.000308 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:42:57 crc kubenswrapper[4860]: I0123 08:42:57.098761 4860 generic.go:334] "Generic (PLEG): container finished" podID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" exitCode=0 Jan 23 08:42:57 crc kubenswrapper[4860]: I0123 08:42:57.098806 4860 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerDied","Data":"0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47"} Jan 23 08:42:57 crc kubenswrapper[4860]: I0123 08:42:57.099259 4860 scope.go:117] "RemoveContainer" containerID="cfb686e44ef0e61ba024387441d8514e0c284409a4a0bfcf8b79aaad27b5ee16" Jan 23 08:42:57 crc kubenswrapper[4860]: I0123 08:42:57.099764 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:42:57 crc kubenswrapper[4860]: E0123 08:42:57.099959 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:43:09 crc kubenswrapper[4860]: I0123 08:43:09.649672 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-fc4d6dcb5-bxskz_de675024-06bf-4117-a893-ad173f468f8f/operator/0.log" Jan 23 08:43:09 crc kubenswrapper[4860]: I0123 08:43:09.657465 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:43:09 crc kubenswrapper[4860]: E0123 08:43:09.657727 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:43:12 crc kubenswrapper[4860]: I0123 08:43:12.910803 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-c6d49cd4c-gg9cf_69a7a482-ef66-4336-9668-20669049a86b/operator/0.log" Jan 23 08:43:13 crc kubenswrapper[4860]: I0123 08:43:13.175647 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_2dd093f5-013c-45f6-8d5a-3a1b09dbc0c6/qdr/0.log" Jan 23 08:43:23 crc kubenswrapper[4860]: I0123 08:43:23.661533 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:43:23 crc kubenswrapper[4860]: E0123 08:43:23.662317 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:43:36 crc kubenswrapper[4860]: I0123 08:43:36.657525 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:43:36 crc kubenswrapper[4860]: E0123 08:43:36.659291 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:43:47 crc kubenswrapper[4860]: I0123 08:43:47.657621 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:43:47 crc kubenswrapper[4860]: E0123 08:43:47.658385 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.992589 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rnhjk/must-gather-lqnzc"] Jan 23 08:43:48 crc kubenswrapper[4860]: E0123 08:43:48.992902 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" containerName="smoketest-collectd" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.992915 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" containerName="smoketest-collectd" Jan 23 08:43:48 crc kubenswrapper[4860]: E0123 08:43:48.992934 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" containerName="smoketest-ceilometer" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.992941 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" containerName="smoketest-ceilometer" Jan 23 08:43:48 crc kubenswrapper[4860]: E0123 08:43:48.992957 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb71573-bc5e-495f-80ed-ed870a231d05" containerName="curl" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.992964 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb71573-bc5e-495f-80ed-ed870a231d05" containerName="curl" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.993145 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" containerName="smoketest-ceilometer" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.993161 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb71573-bc5e-495f-80ed-ed870a231d05" containerName="curl" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.993172 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d017f1-dc34-4fee-adc1-081a6969152c" containerName="smoketest-collectd" Jan 23 08:43:48 crc kubenswrapper[4860]: I0123 08:43:48.993963 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.006970 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rnhjk"/"openshift-service-ca.crt" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.008222 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rnhjk/must-gather-lqnzc"] Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.009288 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rnhjk"/"kube-root-ca.crt" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.060370 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxx8g\" (UniqueName: \"kubernetes.io/projected/056a76c7-1df2-4de3-b014-23b052d13ad8-kube-api-access-lxx8g\") pod \"must-gather-lqnzc\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.060446 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/056a76c7-1df2-4de3-b014-23b052d13ad8-must-gather-output\") pod \"must-gather-lqnzc\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.162066 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxx8g\" (UniqueName: \"kubernetes.io/projected/056a76c7-1df2-4de3-b014-23b052d13ad8-kube-api-access-lxx8g\") pod \"must-gather-lqnzc\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.162125 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/056a76c7-1df2-4de3-b014-23b052d13ad8-must-gather-output\") pod \"must-gather-lqnzc\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.162714 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/056a76c7-1df2-4de3-b014-23b052d13ad8-must-gather-output\") pod \"must-gather-lqnzc\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.183865 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxx8g\" (UniqueName: \"kubernetes.io/projected/056a76c7-1df2-4de3-b014-23b052d13ad8-kube-api-access-lxx8g\") pod \"must-gather-lqnzc\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.311524 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:43:49 crc kubenswrapper[4860]: I0123 08:43:49.545715 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rnhjk/must-gather-lqnzc"] Jan 23 08:43:50 crc kubenswrapper[4860]: I0123 08:43:50.499073 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" event={"ID":"056a76c7-1df2-4de3-b014-23b052d13ad8","Type":"ContainerStarted","Data":"94dc811a029a8e38ca72ce8702c86818bc3899349dbebe6101f7eeeb9b146052"} Jan 23 08:44:00 crc kubenswrapper[4860]: I0123 08:44:00.658157 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:44:00 crc kubenswrapper[4860]: E0123 08:44:00.658956 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:44:05 crc kubenswrapper[4860]: I0123 08:44:05.612342 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" event={"ID":"056a76c7-1df2-4de3-b014-23b052d13ad8","Type":"ContainerStarted","Data":"0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862"} Jan 23 08:44:06 crc kubenswrapper[4860]: I0123 08:44:06.636284 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" event={"ID":"056a76c7-1df2-4de3-b014-23b052d13ad8","Type":"ContainerStarted","Data":"647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883"} Jan 23 08:44:06 crc kubenswrapper[4860]: I0123 08:44:06.653525 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" podStartSLOduration=3.444392855 podStartE2EDuration="18.653510833s" podCreationTimestamp="2026-01-23 08:43:48 +0000 UTC" firstStartedPulling="2026-01-23 08:43:49.55470385 +0000 UTC m=+1736.182754035" lastFinishedPulling="2026-01-23 08:44:04.763821818 +0000 UTC m=+1751.391872013" observedRunningTime="2026-01-23 08:44:06.650654302 +0000 UTC m=+1753.278704497" watchObservedRunningTime="2026-01-23 08:44:06.653510833 +0000 UTC m=+1753.281561018" Jan 23 08:44:12 crc kubenswrapper[4860]: I0123 08:44:12.658430 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:44:12 crc kubenswrapper[4860]: E0123 08:44:12.659606 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:44:23 crc kubenswrapper[4860]: I0123 08:44:23.662237 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:44:23 crc kubenswrapper[4860]: E0123 08:44:23.662944 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:44:35 crc kubenswrapper[4860]: I0123 08:44:35.657781 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:44:35 crc kubenswrapper[4860]: E0123 08:44:35.658524 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:44:45 crc kubenswrapper[4860]: I0123 08:44:45.022952 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-q7sm9_d8bc0ea0-46f4-4c4b-b008-394713ae0863/control-plane-machine-set-operator/0.log" Jan 23 08:44:45 crc kubenswrapper[4860]: I0123 08:44:45.164278 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2g9xr_25973908-6ad1-4e3b-a493-de9f5baef4e8/kube-rbac-proxy/0.log" Jan 23 08:44:45 crc kubenswrapper[4860]: I0123 08:44:45.179327 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2g9xr_25973908-6ad1-4e3b-a493-de9f5baef4e8/machine-api-operator/0.log" Jan 23 08:44:50 crc kubenswrapper[4860]: I0123 08:44:50.657718 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:44:50 crc kubenswrapper[4860]: E0123 08:44:50.658411 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:44:56 crc kubenswrapper[4860]: I0123 08:44:56.109767 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-qptpz_0f30f168-993f-4527-9e0b-2b12457e1547/cert-manager-controller/0.log" Jan 23 08:44:56 crc kubenswrapper[4860]: I0123 08:44:56.274380 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-75t87_89ef0dff-b1b7-49c5-9fcc-d94ffaefcf7b/cert-manager-webhook/0.log" Jan 23 08:44:56 crc kubenswrapper[4860]: I0123 08:44:56.304740 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-n7hvx_83613435-16bb-432d-87af-71aa80fecf79/cert-manager-cainjector/0.log" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.138423 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q"] Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.139768 4860 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.141985 4860 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.141991 4860 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.155738 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q"] Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.240852 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8ce857-df0b-46df-94be-64645e07b53c-config-volume\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.240990 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc8ce857-df0b-46df-94be-64645e07b53c-secret-volume\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.241130 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbfpb\" (UniqueName: \"kubernetes.io/projected/bc8ce857-df0b-46df-94be-64645e07b53c-kube-api-access-qbfpb\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.342495 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc8ce857-df0b-46df-94be-64645e07b53c-secret-volume\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.342577 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbfpb\" (UniqueName: \"kubernetes.io/projected/bc8ce857-df0b-46df-94be-64645e07b53c-kube-api-access-qbfpb\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.342636 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8ce857-df0b-46df-94be-64645e07b53c-config-volume\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.344045 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8ce857-df0b-46df-94be-64645e07b53c-config-volume\") pod 
\"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.348728 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc8ce857-df0b-46df-94be-64645e07b53c-secret-volume\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.361765 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbfpb\" (UniqueName: \"kubernetes.io/projected/bc8ce857-df0b-46df-94be-64645e07b53c-kube-api-access-qbfpb\") pod \"collect-profiles-29485965-h6g8q\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.458524 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:00 crc kubenswrapper[4860]: I0123 08:45:00.882313 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q"] Jan 23 08:45:01 crc kubenswrapper[4860]: I0123 08:45:01.014514 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" event={"ID":"bc8ce857-df0b-46df-94be-64645e07b53c","Type":"ContainerStarted","Data":"0aa02b3884dc57a7fa4ac628c9472753f50d7fd6a375728e1b05e7f1590230b4"} Jan 23 08:45:02 crc kubenswrapper[4860]: I0123 08:45:02.022050 4860 generic.go:334] "Generic (PLEG): container finished" podID="bc8ce857-df0b-46df-94be-64645e07b53c" containerID="9e38df94097292b6b33d4eaf0f1d566d9c8d4372e0c13590f352e52582ad34c3" exitCode=0 Jan 23 08:45:02 crc kubenswrapper[4860]: I0123 08:45:02.022138 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" event={"ID":"bc8ce857-df0b-46df-94be-64645e07b53c","Type":"ContainerDied","Data":"9e38df94097292b6b33d4eaf0f1d566d9c8d4372e0c13590f352e52582ad34c3"} Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.255941 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.384793 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc8ce857-df0b-46df-94be-64645e07b53c-secret-volume\") pod \"bc8ce857-df0b-46df-94be-64645e07b53c\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.385997 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbfpb\" (UniqueName: \"kubernetes.io/projected/bc8ce857-df0b-46df-94be-64645e07b53c-kube-api-access-qbfpb\") pod \"bc8ce857-df0b-46df-94be-64645e07b53c\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.386067 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8ce857-df0b-46df-94be-64645e07b53c-config-volume\") pod \"bc8ce857-df0b-46df-94be-64645e07b53c\" (UID: \"bc8ce857-df0b-46df-94be-64645e07b53c\") " Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.386677 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8ce857-df0b-46df-94be-64645e07b53c-config-volume" (OuterVolumeSpecName: "config-volume") pod "bc8ce857-df0b-46df-94be-64645e07b53c" (UID: "bc8ce857-df0b-46df-94be-64645e07b53c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.390091 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8ce857-df0b-46df-94be-64645e07b53c-kube-api-access-qbfpb" (OuterVolumeSpecName: "kube-api-access-qbfpb") pod "bc8ce857-df0b-46df-94be-64645e07b53c" (UID: "bc8ce857-df0b-46df-94be-64645e07b53c"). InnerVolumeSpecName "kube-api-access-qbfpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.402138 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8ce857-df0b-46df-94be-64645e07b53c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bc8ce857-df0b-46df-94be-64645e07b53c" (UID: "bc8ce857-df0b-46df-94be-64645e07b53c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.487977 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbfpb\" (UniqueName: \"kubernetes.io/projected/bc8ce857-df0b-46df-94be-64645e07b53c-kube-api-access-qbfpb\") on node \"crc\" DevicePath \"\"" Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.488015 4860 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc8ce857-df0b-46df-94be-64645e07b53c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:45:03 crc kubenswrapper[4860]: I0123 08:45:03.488040 4860 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bc8ce857-df0b-46df-94be-64645e07b53c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:45:04 crc kubenswrapper[4860]: I0123 08:45:04.034309 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" event={"ID":"bc8ce857-df0b-46df-94be-64645e07b53c","Type":"ContainerDied","Data":"0aa02b3884dc57a7fa4ac628c9472753f50d7fd6a375728e1b05e7f1590230b4"} Jan 23 08:45:04 crc kubenswrapper[4860]: I0123 08:45:04.034348 4860 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa02b3884dc57a7fa4ac628c9472753f50d7fd6a375728e1b05e7f1590230b4" Jan 23 08:45:04 crc kubenswrapper[4860]: I0123 08:45:04.034380 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-h6g8q" Jan 23 08:45:04 crc kubenswrapper[4860]: I0123 08:45:04.658086 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:45:04 crc kubenswrapper[4860]: E0123 08:45:04.658361 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:45:08 crc kubenswrapper[4860]: I0123 08:45:08.551251 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-68wdp_5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd/prometheus-operator/0.log" Jan 23 08:45:08 crc kubenswrapper[4860]: I0123 08:45:08.725986 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54d658468c-9jckz_d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b/prometheus-operator-admission-webhook/0.log" Jan 23 08:45:08 crc kubenswrapper[4860]: I0123 08:45:08.820325 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54d658468c-qzmzn_1305c249-d5a2-488b-9c2c-2f528c3a0f49/prometheus-operator-admission-webhook/0.log" Jan 23 08:45:08 crc kubenswrapper[4860]: I0123 08:45:08.904253 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zkz7p_ea9793c3-f700-496f-b5b3-330037b6323e/operator/0.log" Jan 23 08:45:08 crc kubenswrapper[4860]: I0123 08:45:08.977827 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-d9lkg_265698c3-44cd-419b-a5db-f35ec2ef8514/perses-operator/0.log" Jan 23 08:45:18 crc kubenswrapper[4860]: I0123 08:45:18.657647 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:45:18 crc kubenswrapper[4860]: E0123 08:45:18.658138 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.238701 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/util/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.360305 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/util/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.413994 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/pull/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.441478 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/pull/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.587107 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/pull/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.590183 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/util/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.602974 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931avbw49_53f57f2b-fcc2-4afb-997e-97a4c3c3e184/extract/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.758375 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/util/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.905767 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/pull/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.914388 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/util/0.log" Jan 23 08:45:21 crc kubenswrapper[4860]: I0123 08:45:21.941852 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.077901 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/util/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.104499 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/extract/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.104964 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6xv2c_ce3097f0-7b8c-44b3-b209-fe060e774f70/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.254366 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/util/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.384684 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/util/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.388101 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.413454 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.559929 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.592521 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/extract/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.605170 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5el6dkd_a45c01fc-b998-4a1a-b4ed-5b6aacf9845e/util/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.730131 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/util/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.884266 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.885499 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/pull/0.log" Jan 23 08:45:22 crc kubenswrapper[4860]: I0123 08:45:22.920574 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/util/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.056719 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/util/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.057247 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/extract/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.067621 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08lgr9l_d7e81962-a190-4650-97f2-c0a40bebebe8/pull/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.225460 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/extract-utilities/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.374674 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/extract-utilities/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.397294 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/extract-content/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.404558 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/extract-content/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.586924 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/extract-utilities/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.590731 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/extract-content/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.791863 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lnlb_308edb1c-e0d1-4e4f-8452-937ce8fd192f/registry-server/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.804910 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/extract-utilities/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.968390 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/extract-utilities/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.976080 4860 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/extract-content/0.log" Jan 23 08:45:23 crc kubenswrapper[4860]: I0123 08:45:23.985035 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/extract-content/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.168277 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/extract-utilities/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.185637 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/extract-content/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.331304 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qrbrs_25ce68cb-8937-4377-bdad-80b09dad889c/marketplace-operator/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.441747 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/extract-utilities/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.462693 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ct7q8_87d48386-02fa-481d-81a2-0d96e1b4dd5b/registry-server/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.622656 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/extract-utilities/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.643468 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/extract-content/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.672940 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/extract-content/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.785795 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/extract-content/0.log" Jan 23 08:45:24 crc kubenswrapper[4860]: I0123 08:45:24.808218 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/extract-utilities/0.log" Jan 23 08:45:25 crc kubenswrapper[4860]: I0123 08:45:25.074451 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dp8mj_2afd08be-90e1-4877-8ef2-249d867ad2c6/registry-server/0.log" Jan 23 08:45:33 crc kubenswrapper[4860]: I0123 08:45:33.662232 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:45:33 crc kubenswrapper[4860]: E0123 08:45:33.662869 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:45:35 crc kubenswrapper[4860]: I0123 08:45:35.629911 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54d658468c-9jckz_d51c3c71-d19c-4e4c-82a1-bcf628c5fe8b/prometheus-operator-admission-webhook/0.log" Jan 23 08:45:35 crc kubenswrapper[4860]: I0123 08:45:35.633880 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-68wdp_5bd5ffe6-350f-4c69-bafa-f203a0c1e7bd/prometheus-operator/0.log" Jan 23 08:45:35 crc kubenswrapper[4860]: I0123 08:45:35.635174 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54d658468c-qzmzn_1305c249-d5a2-488b-9c2c-2f528c3a0f49/prometheus-operator-admission-webhook/0.log" Jan 23 08:45:35 crc kubenswrapper[4860]: I0123 08:45:35.745639 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-zkz7p_ea9793c3-f700-496f-b5b3-330037b6323e/operator/0.log" Jan 23 08:45:35 crc kubenswrapper[4860]: I0123 08:45:35.811415 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-d9lkg_265698c3-44cd-419b-a5db-f35ec2ef8514/perses-operator/0.log" Jan 23 08:45:46 crc kubenswrapper[4860]: I0123 08:45:46.657744 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:45:46 crc kubenswrapper[4860]: E0123 08:45:46.658489 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:45:57 crc kubenswrapper[4860]: I0123 08:45:57.661871 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:45:57 crc kubenswrapper[4860]: E0123 08:45:57.662599 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:46:09 crc kubenswrapper[4860]: I0123 08:46:09.657534 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:46:09 crc kubenswrapper[4860]: E0123 08:46:09.658492 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.135959 4860 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-xn9h5"] Jan 23 08:46:17 crc kubenswrapper[4860]: E0123 08:46:17.137140 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8ce857-df0b-46df-94be-64645e07b53c" containerName="collect-profiles" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.137172 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8ce857-df0b-46df-94be-64645e07b53c" containerName="collect-profiles" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.137411 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8ce857-df0b-46df-94be-64645e07b53c" containerName="collect-profiles" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.139133 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.154968 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xn9h5"] Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.288091 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcj47\" (UniqueName: \"kubernetes.io/projected/2274ef31-0723-4cf1-884d-6de67800f713-kube-api-access-lcj47\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.288248 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-catalog-content\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.288288 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-utilities\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.389501 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-utilities\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.389590 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcj47\" (UniqueName: \"kubernetes.io/projected/2274ef31-0723-4cf1-884d-6de67800f713-kube-api-access-lcj47\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.389671 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-catalog-content\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.390450 4860 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-utilities\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.390475 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-catalog-content\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.417335 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcj47\" (UniqueName: \"kubernetes.io/projected/2274ef31-0723-4cf1-884d-6de67800f713-kube-api-access-lcj47\") pod \"certified-operators-xn9h5\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.462315 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:17 crc kubenswrapper[4860]: I0123 08:46:17.752873 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xn9h5"] Jan 23 08:46:18 crc kubenswrapper[4860]: I0123 08:46:18.628778 4860 generic.go:334] "Generic (PLEG): container finished" podID="2274ef31-0723-4cf1-884d-6de67800f713" containerID="bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726" exitCode=0 Jan 23 08:46:18 crc kubenswrapper[4860]: I0123 08:46:18.628867 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn9h5" event={"ID":"2274ef31-0723-4cf1-884d-6de67800f713","Type":"ContainerDied","Data":"bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726"} Jan 23 08:46:18 crc kubenswrapper[4860]: I0123 08:46:18.631755 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn9h5" event={"ID":"2274ef31-0723-4cf1-884d-6de67800f713","Type":"ContainerStarted","Data":"2b6065194e7dda8534016eda063a7de4a976f40306450d3aaa10a291ea0c6233"} Jan 23 08:46:18 crc kubenswrapper[4860]: I0123 08:46:18.630589 4860 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:46:20 crc kubenswrapper[4860]: I0123 08:46:20.664750 4860 generic.go:334] "Generic (PLEG): container finished" podID="2274ef31-0723-4cf1-884d-6de67800f713" containerID="6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628" exitCode=0 Jan 23 08:46:20 crc kubenswrapper[4860]: I0123 08:46:20.665095 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn9h5" event={"ID":"2274ef31-0723-4cf1-884d-6de67800f713","Type":"ContainerDied","Data":"6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628"} Jan 23 08:46:21 crc kubenswrapper[4860]: I0123 08:46:21.658348 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:46:21 crc kubenswrapper[4860]: E0123 08:46:21.659043 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:46:22 crc kubenswrapper[4860]: I0123 08:46:22.682305 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn9h5" event={"ID":"2274ef31-0723-4cf1-884d-6de67800f713","Type":"ContainerStarted","Data":"302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394"} Jan 23 08:46:23 crc kubenswrapper[4860]: I0123 08:46:23.708317 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xn9h5" podStartSLOduration=3.603304554 podStartE2EDuration="6.708298765s" podCreationTimestamp="2026-01-23 08:46:17 +0000 UTC" firstStartedPulling="2026-01-23 08:46:18.630319473 +0000 UTC m=+1885.258369658" lastFinishedPulling="2026-01-23 08:46:21.735313684 +0000 UTC m=+1888.363363869" observedRunningTime="2026-01-23 08:46:23.702705088 +0000 UTC m=+1890.330755273" watchObservedRunningTime="2026-01-23 08:46:23.708298765 +0000 UTC m=+1890.336348950" Jan 23 08:46:27 crc kubenswrapper[4860]: I0123 08:46:27.463079 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:27 crc kubenswrapper[4860]: I0123 08:46:27.463157 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:27 crc kubenswrapper[4860]: I0123 08:46:27.498914 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:27 crc kubenswrapper[4860]: I0123 08:46:27.760656 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:27 crc kubenswrapper[4860]: I0123 08:46:27.807077 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xn9h5"] Jan 23 08:46:29 crc kubenswrapper[4860]: I0123 08:46:29.731826 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xn9h5" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="registry-server" containerID="cri-o://302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394" gracePeriod=2 Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.160655 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.205286 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-catalog-content\") pod \"2274ef31-0723-4cf1-884d-6de67800f713\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.205333 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcj47\" (UniqueName: \"kubernetes.io/projected/2274ef31-0723-4cf1-884d-6de67800f713-kube-api-access-lcj47\") pod \"2274ef31-0723-4cf1-884d-6de67800f713\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.205388 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-utilities\") pod \"2274ef31-0723-4cf1-884d-6de67800f713\" (UID: \"2274ef31-0723-4cf1-884d-6de67800f713\") " Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.206472 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-utilities" (OuterVolumeSpecName: "utilities") pod "2274ef31-0723-4cf1-884d-6de67800f713" (UID: "2274ef31-0723-4cf1-884d-6de67800f713"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.211638 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2274ef31-0723-4cf1-884d-6de67800f713-kube-api-access-lcj47" (OuterVolumeSpecName: "kube-api-access-lcj47") pod "2274ef31-0723-4cf1-884d-6de67800f713" (UID: "2274ef31-0723-4cf1-884d-6de67800f713"). InnerVolumeSpecName "kube-api-access-lcj47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.249319 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2274ef31-0723-4cf1-884d-6de67800f713" (UID: "2274ef31-0723-4cf1-884d-6de67800f713"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.307711 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcj47\" (UniqueName: \"kubernetes.io/projected/2274ef31-0723-4cf1-884d-6de67800f713-kube-api-access-lcj47\") on node \"crc\" DevicePath \"\"" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.307765 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.307782 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2274ef31-0723-4cf1-884d-6de67800f713-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.743576 4860 generic.go:334] "Generic (PLEG): container finished" podID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerID="0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862" exitCode=0 Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.743714 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" event={"ID":"056a76c7-1df2-4de3-b014-23b052d13ad8","Type":"ContainerDied","Data":"0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862"} Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.744744 4860 scope.go:117] "RemoveContainer" containerID="0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.747881 4860 generic.go:334] "Generic (PLEG): container finished" podID="2274ef31-0723-4cf1-884d-6de67800f713" containerID="302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394" exitCode=0 Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.747967 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn9h5" event={"ID":"2274ef31-0723-4cf1-884d-6de67800f713","Type":"ContainerDied","Data":"302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394"} Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.748000 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn9h5" event={"ID":"2274ef31-0723-4cf1-884d-6de67800f713","Type":"ContainerDied","Data":"2b6065194e7dda8534016eda063a7de4a976f40306450d3aaa10a291ea0c6233"} Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.748078 4860 scope.go:117] "RemoveContainer" containerID="302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.748243 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xn9h5" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.772868 4860 scope.go:117] "RemoveContainer" containerID="6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.776907 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xn9h5"] Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.781680 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xn9h5"] Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.788741 4860 scope.go:117] "RemoveContainer" containerID="bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.814117 4860 scope.go:117] "RemoveContainer" containerID="302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394" Jan 23 08:46:30 crc kubenswrapper[4860]: E0123 08:46:30.814581 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394\": container with ID starting with 302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394 not found: ID does not exist" containerID="302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.814624 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394"} err="failed to get container status \"302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394\": rpc error: code = NotFound desc = could not find container \"302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394\": container with ID starting with 302d808e81699febd80a2627068b6b1847a4184ab0d2784a696a52031b2ae394 not found: ID does not exist" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.814645 4860 scope.go:117] "RemoveContainer" containerID="6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628" Jan 23 08:46:30 crc kubenswrapper[4860]: E0123 08:46:30.815200 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628\": container with ID starting with 6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628 not found: ID does not exist" containerID="6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.815226 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628"} err="failed to get container status \"6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628\": rpc error: code = NotFound desc = could not find container \"6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628\": container with ID starting with 6c461970c63ec10cfdadadc3126798959dc0b3c1a8dc915e4ee1a16299156628 not found: ID does not exist" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.815242 4860 scope.go:117] "RemoveContainer" containerID="bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726" Jan 23 08:46:30 crc kubenswrapper[4860]: E0123 08:46:30.815606 4860 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726\": container with ID starting with bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726 not found: ID does not exist" containerID="bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726" Jan 23 08:46:30 crc kubenswrapper[4860]: I0123 08:46:30.815632 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726"} err="failed to get container status \"bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726\": rpc error: code = NotFound desc = could not find container \"bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726\": container with ID starting with bae66b15e6d29d7c24bca697767d6a214387d43ea9055671898514625d1b3726 not found: ID does not exist" Jan 23 08:46:31 crc kubenswrapper[4860]: I0123 08:46:31.546961 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rnhjk_must-gather-lqnzc_056a76c7-1df2-4de3-b014-23b052d13ad8/gather/0.log" Jan 23 08:46:31 crc kubenswrapper[4860]: I0123 08:46:31.665695 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2274ef31-0723-4cf1-884d-6de67800f713" path="/var/lib/kubelet/pods/2274ef31-0723-4cf1-884d-6de67800f713/volumes" Jan 23 08:46:34 crc kubenswrapper[4860]: I0123 08:46:34.658385 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:46:34 crc kubenswrapper[4860]: E0123 08:46:34.659134 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.112601 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rnhjk/must-gather-lqnzc"] Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.113564 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="copy" containerID="cri-o://647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883" gracePeriod=2 Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.117754 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rnhjk/must-gather-lqnzc"] Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.623442 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rnhjk_must-gather-lqnzc_056a76c7-1df2-4de3-b014-23b052d13ad8/copy/0.log" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.624310 4860 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.777943 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxx8g\" (UniqueName: \"kubernetes.io/projected/056a76c7-1df2-4de3-b014-23b052d13ad8-kube-api-access-lxx8g\") pod \"056a76c7-1df2-4de3-b014-23b052d13ad8\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.778087 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/056a76c7-1df2-4de3-b014-23b052d13ad8-must-gather-output\") pod \"056a76c7-1df2-4de3-b014-23b052d13ad8\" (UID: \"056a76c7-1df2-4de3-b014-23b052d13ad8\") " Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.784357 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056a76c7-1df2-4de3-b014-23b052d13ad8-kube-api-access-lxx8g" (OuterVolumeSpecName: "kube-api-access-lxx8g") pod "056a76c7-1df2-4de3-b014-23b052d13ad8" (UID: "056a76c7-1df2-4de3-b014-23b052d13ad8"). InnerVolumeSpecName "kube-api-access-lxx8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.820188 4860 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rnhjk_must-gather-lqnzc_056a76c7-1df2-4de3-b014-23b052d13ad8/copy/0.log" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.820923 4860 generic.go:334] "Generic (PLEG): container finished" podID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerID="647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883" exitCode=143 Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.820987 4860 scope.go:117] "RemoveContainer" containerID="647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.821186 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rnhjk/must-gather-lqnzc" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.836653 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/056a76c7-1df2-4de3-b014-23b052d13ad8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "056a76c7-1df2-4de3-b014-23b052d13ad8" (UID: "056a76c7-1df2-4de3-b014-23b052d13ad8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.841184 4860 scope.go:117] "RemoveContainer" containerID="0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.880218 4860 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/056a76c7-1df2-4de3-b014-23b052d13ad8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.880261 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxx8g\" (UniqueName: \"kubernetes.io/projected/056a76c7-1df2-4de3-b014-23b052d13ad8-kube-api-access-lxx8g\") on node \"crc\" DevicePath \"\"" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.882390 4860 scope.go:117] "RemoveContainer" containerID="647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883" Jan 23 08:46:39 crc kubenswrapper[4860]: E0123 08:46:39.883643 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883\": container with ID starting with 647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883 not found: ID does not exist" containerID="647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.883689 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883"} err="failed to get container status \"647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883\": rpc error: code = NotFound desc = could not find container \"647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883\": container with ID starting with 647e457640627176188c0fd4801150966b439dc9719f21e48aca7df8a0830883 not found: ID does not exist" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.883714 4860 scope.go:117] "RemoveContainer" containerID="0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862" Jan 23 08:46:39 crc kubenswrapper[4860]: E0123 08:46:39.884120 4860 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862\": container with ID starting with 0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862 not found: ID does not exist" containerID="0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862" Jan 23 08:46:39 crc kubenswrapper[4860]: I0123 08:46:39.884172 4860 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862"} err="failed to get container status \"0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862\": rpc error: code = NotFound desc = could not find container \"0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862\": container with ID starting with 0c2b06f015167ed8bae5e42da6d1c9e3d17f14bf2935cf5f2f25cde289ea4862 not found: ID does not exist" Jan 23 08:46:41 crc kubenswrapper[4860]: I0123 08:46:41.664709 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" path="/var/lib/kubelet/pods/056a76c7-1df2-4de3-b014-23b052d13ad8/volumes" Jan 23 08:46:47 crc 
kubenswrapper[4860]: I0123 08:46:47.658477 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:46:47 crc kubenswrapper[4860]: E0123 08:46:47.659229 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:47:02 crc kubenswrapper[4860]: I0123 08:47:02.657668 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:47:02 crc kubenswrapper[4860]: E0123 08:47:02.658529 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.657593 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:47:14 crc kubenswrapper[4860]: E0123 08:47:14.658277 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819331 4860 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pnk6v"] Jan 23 08:47:14 crc kubenswrapper[4860]: E0123 08:47:14.819650 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="registry-server" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819668 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="registry-server" Jan 23 08:47:14 crc kubenswrapper[4860]: E0123 08:47:14.819691 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="copy" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819697 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="copy" Jan 23 08:47:14 crc kubenswrapper[4860]: E0123 08:47:14.819707 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="extract-content" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819713 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="extract-content" Jan 23 08:47:14 crc kubenswrapper[4860]: E0123 08:47:14.819725 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="gather" Jan 23 08:47:14 crc 
kubenswrapper[4860]: I0123 08:47:14.819731 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="gather" Jan 23 08:47:14 crc kubenswrapper[4860]: E0123 08:47:14.819744 4860 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="extract-utilities" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819749 4860 state_mem.go:107] "Deleted CPUSet assignment" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="extract-utilities" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819859 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="gather" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819872 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="056a76c7-1df2-4de3-b014-23b052d13ad8" containerName="copy" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.819885 4860 memory_manager.go:354] "RemoveStaleState removing state" podUID="2274ef31-0723-4cf1-884d-6de67800f713" containerName="registry-server" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.820922 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.834968 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnk6v"] Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.919221 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-catalog-content\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.919338 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-utilities\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:14 crc kubenswrapper[4860]: I0123 08:47:14.919387 4860 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwgr\" (UniqueName: \"kubernetes.io/projected/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-kube-api-access-jdwgr\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.021304 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-catalog-content\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.021854 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-catalog-content\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc 
kubenswrapper[4860]: I0123 08:47:15.022036 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-utilities\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.022100 4860 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwgr\" (UniqueName: \"kubernetes.io/projected/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-kube-api-access-jdwgr\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.022636 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-utilities\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.040885 4860 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwgr\" (UniqueName: \"kubernetes.io/projected/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-kube-api-access-jdwgr\") pod \"redhat-operators-pnk6v\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.140564 4860 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:15 crc kubenswrapper[4860]: I0123 08:47:15.377480 4860 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnk6v"] Jan 23 08:47:16 crc kubenswrapper[4860]: I0123 08:47:16.079234 4860 generic.go:334] "Generic (PLEG): container finished" podID="f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" containerID="b31614531d9b0eea24f8aca4207e0430a031606d20cc0fc7bf49b429294943d2" exitCode=0 Jan 23 08:47:16 crc kubenswrapper[4860]: I0123 08:47:16.079392 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerDied","Data":"b31614531d9b0eea24f8aca4207e0430a031606d20cc0fc7bf49b429294943d2"} Jan 23 08:47:16 crc kubenswrapper[4860]: I0123 08:47:16.079491 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerStarted","Data":"c8028734e4f510291a89c18b8f2e68117c6e30a948b46b2394e0c931aec20002"} Jan 23 08:47:17 crc kubenswrapper[4860]: I0123 08:47:17.087515 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerStarted","Data":"60fc04384def7664344bce2df15d304fc486966c142570208135697b407c0004"} Jan 23 08:47:18 crc kubenswrapper[4860]: I0123 08:47:18.096512 4860 generic.go:334] "Generic (PLEG): container finished" podID="f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" containerID="60fc04384def7664344bce2df15d304fc486966c142570208135697b407c0004" exitCode=0 Jan 23 08:47:18 crc kubenswrapper[4860]: I0123 08:47:18.096564 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" 
event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerDied","Data":"60fc04384def7664344bce2df15d304fc486966c142570208135697b407c0004"} Jan 23 08:47:19 crc kubenswrapper[4860]: I0123 08:47:19.116832 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerStarted","Data":"dc21ac3e00c7615d4f8965cd4257f16c53ca3c1dc3d67fef482faa16865177f6"} Jan 23 08:47:19 crc kubenswrapper[4860]: I0123 08:47:19.133763 4860 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pnk6v" podStartSLOduration=2.546852211 podStartE2EDuration="5.133745029s" podCreationTimestamp="2026-01-23 08:47:14 +0000 UTC" firstStartedPulling="2026-01-23 08:47:16.080956251 +0000 UTC m=+1942.709006436" lastFinishedPulling="2026-01-23 08:47:18.667849069 +0000 UTC m=+1945.295899254" observedRunningTime="2026-01-23 08:47:19.13091905 +0000 UTC m=+1945.758969255" watchObservedRunningTime="2026-01-23 08:47:19.133745029 +0000 UTC m=+1945.761795214" Jan 23 08:47:25 crc kubenswrapper[4860]: I0123 08:47:25.141372 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:25 crc kubenswrapper[4860]: I0123 08:47:25.141917 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:25 crc kubenswrapper[4860]: I0123 08:47:25.207988 4860 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:25 crc kubenswrapper[4860]: I0123 08:47:25.254980 4860 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:25 crc kubenswrapper[4860]: I0123 08:47:25.443091 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pnk6v"] Jan 23 08:47:26 crc kubenswrapper[4860]: I0123 08:47:26.658455 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:47:26 crc kubenswrapper[4860]: E0123 08:47:26.658715 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:47:27 crc kubenswrapper[4860]: I0123 08:47:27.172194 4860 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pnk6v" podUID="f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" containerName="registry-server" containerID="cri-o://dc21ac3e00c7615d4f8965cd4257f16c53ca3c1dc3d67fef482faa16865177f6" gracePeriod=2 Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.196242 4860 generic.go:334] "Generic (PLEG): container finished" podID="f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" containerID="dc21ac3e00c7615d4f8965cd4257f16c53ca3c1dc3d67fef482faa16865177f6" exitCode=0 Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.196296 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" 
event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerDied","Data":"dc21ac3e00c7615d4f8965cd4257f16c53ca3c1dc3d67fef482faa16865177f6"} Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.636676 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.747964 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-catalog-content\") pod \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.748105 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdwgr\" (UniqueName: \"kubernetes.io/projected/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-kube-api-access-jdwgr\") pod \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.748175 4860 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-utilities\") pod \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\" (UID: \"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d\") " Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.749309 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-utilities" (OuterVolumeSpecName: "utilities") pod "f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" (UID: "f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.756307 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-kube-api-access-jdwgr" (OuterVolumeSpecName: "kube-api-access-jdwgr") pod "f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" (UID: "f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d"). InnerVolumeSpecName "kube-api-access-jdwgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.849752 4860 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdwgr\" (UniqueName: \"kubernetes.io/projected/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-kube-api-access-jdwgr\") on node \"crc\" DevicePath \"\"" Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.849811 4860 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.882692 4860 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" (UID: "f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:47:30 crc kubenswrapper[4860]: I0123 08:47:30.950605 4860 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.205340 4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnk6v" event={"ID":"f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d","Type":"ContainerDied","Data":"c8028734e4f510291a89c18b8f2e68117c6e30a948b46b2394e0c931aec20002"} Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.205397 4860 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnk6v" Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.205400 4860 scope.go:117] "RemoveContainer" containerID="dc21ac3e00c7615d4f8965cd4257f16c53ca3c1dc3d67fef482faa16865177f6" Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.223913 4860 scope.go:117] "RemoveContainer" containerID="60fc04384def7664344bce2df15d304fc486966c142570208135697b407c0004" Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.235766 4860 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pnk6v"] Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.241438 4860 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pnk6v"] Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.267284 4860 scope.go:117] "RemoveContainer" containerID="b31614531d9b0eea24f8aca4207e0430a031606d20cc0fc7bf49b429294943d2" Jan 23 08:47:31 crc kubenswrapper[4860]: I0123 08:47:31.677618 4860 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d" path="/var/lib/kubelet/pods/f003ce7e-6cc2-4cdb-b98c-5b75cd186c5d/volumes" Jan 23 08:47:41 crc kubenswrapper[4860]: I0123 08:47:41.657509 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:47:41 crc kubenswrapper[4860]: E0123 08:47:41.658297 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:47:55 crc kubenswrapper[4860]: I0123 08:47:55.658146 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:47:55 crc kubenswrapper[4860]: E0123 08:47:55.658862 4860 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-tk8df_openshift-machine-config-operator(081dccf3-546f-41d3-bd98-ce1b0bbe037e)\"" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" podUID="081dccf3-546f-41d3-bd98-ce1b0bbe037e" Jan 23 08:48:08 crc kubenswrapper[4860]: I0123 08:48:08.657635 4860 scope.go:117] "RemoveContainer" containerID="0d0053ce5fc722f49f2ebe5e413d114a04a40f13adb350c3beb35135cfd86b47" Jan 23 08:48:10 crc kubenswrapper[4860]: I0123 08:48:10.473199 
4860 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tk8df" event={"ID":"081dccf3-546f-41d3-bd98-ce1b0bbe037e","Type":"ContainerStarted","Data":"5f9d63521e0f88bce9b922c206062b6498d018bb227f07deb3ce1be68c475998"}