Dec 13 06:54:50 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 13 06:54:50 crc restorecon[4585]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 13 06:54:50 crc restorecon[4585]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 
06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 13 06:54:50 crc 
restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:50 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 
06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 
06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc 
restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 13 06:54:51 crc restorecon[4585]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 13 06:54:51 crc kubenswrapper[4707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:54:51 crc kubenswrapper[4707]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 13 06:54:51 crc kubenswrapper[4707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:54:51 crc kubenswrapper[4707]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
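The long run of "not reset as customized by admin" messages above is restorecon reporting files it deliberately skipped: their current SELinux type (container_file_t) is listed in the policy's customizable_types, and relabeling leaves such types alone unless explicitly forced. The varying MCS category pairs (for example s0:c7,c13 versus s0:c682,c947) are the per-pod categories the container runtime assigns so pods cannot read each other's files. A minimal sketch of how to inspect and, if desired, override that skip behavior, assuming the standard targeted-policy paths on a node like this one (commands are illustrative, not taken from the log):

    # Types restorecon treats as admin-customized and therefore skips
    # (container_file_t is normally in this list on targeted policy):
    cat /etc/selinux/targeted/contexts/customizable_types

    # Recurse (-R), report each change (-v), and force (-F) customizable
    # types back to the policy default instead of skipping them:
    restorecon -R -v -F /var/lib/kubelet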
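Each "Flag ... has been deprecated" entry points at the same remedy: move the setting into the KubeletConfiguration file named by the kubelet's --config flag. A minimal sketch of the equivalent config-file fields, using illustrative values and a hypothetical file path (on OpenShift the real file is rendered by the machine-config operator, and --minimum-container-ttl-duration has no direct field; evictionHard/evictionSoft cover it):

    # Hypothetical path; the kubelet reads whatever --config points at.
    cat <<'EOF' > /etc/kubernetes/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (CRI-O socket on this node)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved (values illustrative)
    systemReserved:
      cpu: 500m
      memory: 1Gi
    EOF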
Dec 13 06:54:51 crc kubenswrapper[4707]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 06:54:51 crc kubenswrapper[4707]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.597586 4707 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602729 4707 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602771 4707 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602782 4707 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602791 4707 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602799 4707 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602809 4707 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602818 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602826 4707 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602834 4707 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602842 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602850 4707 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602858 4707 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602866 4707 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602874 4707 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602882 4707 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602890 4707 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602899 4707 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602907 4707 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602915 4707 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602923 4707 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602935 4707 feature_gate.go:353] 
Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602945 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602985 4707 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.602995 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603004 4707 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603012 4707 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603023 4707 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603034 4707 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603043 4707 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603052 4707 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603061 4707 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603071 4707 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603079 4707 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603088 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603096 4707 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603117 4707 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603129 4707 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603137 4707 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603146 4707 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603154 4707 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603162 4707 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603170 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603178 4707 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603188 4707 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603196 4707 feature_gate.go:330] unrecognized feature gate: Example Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603205 4707 feature_gate.go:330] unrecognized feature gate: 
VSphereMultiVCenters Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603213 4707 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603221 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603229 4707 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603237 4707 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603245 4707 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603253 4707 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603262 4707 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603273 4707 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603282 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603290 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603298 4707 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603307 4707 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603316 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603324 4707 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603332 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603340 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603348 4707 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603356 4707 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603364 4707 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603372 4707 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603381 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603391 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603398 4707 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603406 4707 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.603415 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 06:54:51 
crc kubenswrapper[4707]: I1213 06:54:51.603581 4707 flags.go:64] FLAG: --address="0.0.0.0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603609 4707 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603628 4707 flags.go:64] FLAG: --anonymous-auth="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603643 4707 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603658 4707 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603671 4707 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603687 4707 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603703 4707 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603716 4707 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603728 4707 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603741 4707 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603753 4707 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603765 4707 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603777 4707 flags.go:64] FLAG: --cgroup-root="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603787 4707 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603796 4707 flags.go:64] FLAG: --client-ca-file="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603805 4707 flags.go:64] FLAG: --cloud-config="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603815 4707 flags.go:64] FLAG: --cloud-provider="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603826 4707 flags.go:64] FLAG: --cluster-dns="[]" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603844 4707 flags.go:64] FLAG: --cluster-domain="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603855 4707 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603932 4707 flags.go:64] FLAG: --config-dir="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603944 4707 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603976 4707 flags.go:64] FLAG: --container-log-max-files="5" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603989 4707 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.603999 4707 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604010 4707 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604020 4707 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604033 4707 flags.go:64] FLAG: --contention-profiling="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604043 4707 
flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604054 4707 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604066 4707 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604076 4707 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604106 4707 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604116 4707 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604126 4707 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604136 4707 flags.go:64] FLAG: --enable-load-reader="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604146 4707 flags.go:64] FLAG: --enable-server="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604157 4707 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604169 4707 flags.go:64] FLAG: --event-burst="100" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604179 4707 flags.go:64] FLAG: --event-qps="50" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604191 4707 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604202 4707 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604212 4707 flags.go:64] FLAG: --eviction-hard="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604224 4707 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604234 4707 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604244 4707 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604256 4707 flags.go:64] FLAG: --eviction-soft="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604266 4707 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604276 4707 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604287 4707 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604297 4707 flags.go:64] FLAG: --experimental-mounter-path="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604307 4707 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604317 4707 flags.go:64] FLAG: --fail-swap-on="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604328 4707 flags.go:64] FLAG: --feature-gates="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604339 4707 flags.go:64] FLAG: --file-check-frequency="20s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604349 4707 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604360 4707 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604370 4707 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604381 4707 flags.go:64] 
FLAG: --healthz-port="10248" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604391 4707 flags.go:64] FLAG: --help="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604401 4707 flags.go:64] FLAG: --hostname-override="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604411 4707 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604422 4707 flags.go:64] FLAG: --http-check-frequency="20s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604432 4707 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604442 4707 flags.go:64] FLAG: --image-credential-provider-config="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604452 4707 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604462 4707 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604472 4707 flags.go:64] FLAG: --image-service-endpoint="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604481 4707 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604492 4707 flags.go:64] FLAG: --kube-api-burst="100" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604502 4707 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604512 4707 flags.go:64] FLAG: --kube-api-qps="50" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604523 4707 flags.go:64] FLAG: --kube-reserved="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604533 4707 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604543 4707 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604553 4707 flags.go:64] FLAG: --kubelet-cgroups="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604564 4707 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604574 4707 flags.go:64] FLAG: --lock-file="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604584 4707 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604594 4707 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604604 4707 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604619 4707 flags.go:64] FLAG: --log-json-split-stream="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604631 4707 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604641 4707 flags.go:64] FLAG: --log-text-split-stream="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604652 4707 flags.go:64] FLAG: --logging-format="text" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604662 4707 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604673 4707 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604684 4707 flags.go:64] FLAG: --manifest-url="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604694 4707 flags.go:64] FLAG: 
--manifest-url-header="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604709 4707 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604720 4707 flags.go:64] FLAG: --max-open-files="1000000" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604735 4707 flags.go:64] FLAG: --max-pods="110" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604747 4707 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604759 4707 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604771 4707 flags.go:64] FLAG: --memory-manager-policy="None" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604783 4707 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604795 4707 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604806 4707 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604817 4707 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604842 4707 flags.go:64] FLAG: --node-status-max-images="50" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604852 4707 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604861 4707 flags.go:64] FLAG: --oom-score-adj="-999" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604871 4707 flags.go:64] FLAG: --pod-cidr="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604879 4707 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604892 4707 flags.go:64] FLAG: --pod-manifest-path="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604901 4707 flags.go:64] FLAG: --pod-max-pids="-1" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604911 4707 flags.go:64] FLAG: --pods-per-core="0" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604920 4707 flags.go:64] FLAG: --port="10250" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604929 4707 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604939 4707 flags.go:64] FLAG: --provider-id="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.604979 4707 flags.go:64] FLAG: --qos-reserved="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605005 4707 flags.go:64] FLAG: --read-only-port="10255" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605020 4707 flags.go:64] FLAG: --register-node="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605032 4707 flags.go:64] FLAG: --register-schedulable="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605044 4707 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605065 4707 flags.go:64] FLAG: --registry-burst="10" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605077 4707 flags.go:64] FLAG: --registry-qps="5" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605088 4707 flags.go:64] FLAG: 
--reserved-cpus="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605103 4707 flags.go:64] FLAG: --reserved-memory="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605117 4707 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605129 4707 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605141 4707 flags.go:64] FLAG: --rotate-certificates="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605152 4707 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605164 4707 flags.go:64] FLAG: --runonce="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605176 4707 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605190 4707 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605202 4707 flags.go:64] FLAG: --seccomp-default="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605213 4707 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605225 4707 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605237 4707 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605248 4707 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605261 4707 flags.go:64] FLAG: --storage-driver-password="root" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605272 4707 flags.go:64] FLAG: --storage-driver-secure="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605283 4707 flags.go:64] FLAG: --storage-driver-table="stats" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605294 4707 flags.go:64] FLAG: --storage-driver-user="root" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605305 4707 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605319 4707 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605330 4707 flags.go:64] FLAG: --system-cgroups="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605341 4707 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605362 4707 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605373 4707 flags.go:64] FLAG: --tls-cert-file="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605385 4707 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605400 4707 flags.go:64] FLAG: --tls-min-version="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605411 4707 flags.go:64] FLAG: --tls-private-key-file="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605422 4707 flags.go:64] FLAG: --topology-manager-policy="none" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605434 4707 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605446 4707 flags.go:64] FLAG: --topology-manager-scope="container" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605458 4707 flags.go:64] FLAG: 
--v="2" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605474 4707 flags.go:64] FLAG: --version="false" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605487 4707 flags.go:64] FLAG: --vmodule="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605500 4707 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.605513 4707 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605775 4707 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605792 4707 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605802 4707 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605813 4707 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605824 4707 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605840 4707 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605854 4707 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605866 4707 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605876 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605886 4707 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605897 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605907 4707 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605917 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605927 4707 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605941 4707 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.605992 4707 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606012 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606024 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606034 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606044 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606054 4707 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606064 4707 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606073 4707 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606082 4707 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606090 4707 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606099 4707 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606108 4707 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606118 4707 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606129 4707 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606139 4707 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606150 4707 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606160 4707 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606170 4707 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606180 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606190 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606200 4707 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606213 4707 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
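[Annotation] The long feature_gate.go:330 runs are the kubelet rejecting gate names it does not know: the node's configuration carries OpenShift-specific gates (GatewayAPI, NewOLM, PinnedImages, ...) while the embedded registry tracks only upstream Kubernetes gates, so each unknown name is logged and ignored rather than treated as fatal. A toy model of that warn-and-skip behavior, with a deliberately tiny illustrative registry (not the real one):

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Tiny illustrative registry; the real kubelet registry is far larger.
KNOWN = {"CloudDualStackNodeIPs": True, "KMSv1": False,
         "ValidatingAdmissionPolicy": True}

def apply_gates(requested, known):
    """Apply requested gate values, warning about and skipping unknown names."""
    effective = dict(known)
    for name, value in requested.items():
        if name not in effective:
            logging.warning("unrecognized feature gate: %s", name)
            continue
        effective[name] = value
    return effective

# apply_gates({"GatewayAPI": True, "KMSv1": True}, KNOWN) warns once about
# GatewayAPI and returns the map with KMSv1 flipped to True.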
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606224 4707 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606237 4707 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606248 4707 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606259 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606271 4707 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606284 4707 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606323 4707 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606334 4707 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606345 4707 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606356 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606365 4707 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606444 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606455 4707 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606466 4707 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606476 4707 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606487 4707 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606496 4707 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606506 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606515 4707 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606524 4707 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606533 4707 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606542 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606552 4707 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606562 4707 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606572 4707 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606583 4707 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606593 4707 feature_gate.go:330] unrecognized feature gate: Example Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606603 4707 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606613 4707 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606623 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606634 4707 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606645 4707 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606655 4707 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.606665 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.606696 4707 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.614462 4707 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.614509 4707 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614593 4707 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614603 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614608 4707 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614612 4707 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614617 4707 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614621 4707 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614625 4707 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614630 4707 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614636 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614642 4707 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614647 4707 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 06:54:51 crc 
kubenswrapper[4707]: W1213 06:54:51.614653 4707 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614658 4707 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614663 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614668 4707 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614673 4707 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614677 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614681 4707 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614686 4707 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614689 4707 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614693 4707 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614697 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614700 4707 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614705 4707 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614708 4707 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614712 4707 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614716 4707 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614721 4707 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
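[Annotation] Note that essentially the same warning set repeats several times in this boot, apparently because the gate configuration is parsed once per consuming component, so the raw volume overstates the number of distinct unknown gates. Collapsing the noise into per-gate counts, under the same saved-journal assumption:

import re
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate:\s*(\w+)")

def gate_warning_counts(text):
    """Count how many times each unknown gate name was warned about."""
    return Counter(UNRECOGNIZED.findall(text))

# On the dump above this yields the same ~70 gate names, each seen a handful
# of times, rather than hundreds of apparently distinct problems.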
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614727 4707 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614731 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614735 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614739 4707 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614743 4707 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614747 4707 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614752 4707 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614757 4707 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614761 4707 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614766 4707 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614770 4707 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614775 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614780 4707 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614786 4707 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614790 4707 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614794 4707 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614798 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614802 4707 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614806 4707 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614810 4707 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614815 4707 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614820 4707 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614824 4707 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614828 4707 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614833 4707 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614838 4707 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614842 4707 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614846 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614851 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614856 4707 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614862 4707 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614868 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614873 4707 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614878 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614882 4707 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614887 4707 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614893 4707 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614900 4707 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614904 4707 feature_gate.go:330] unrecognized feature gate: Example Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614909 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614915 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614920 4707 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.614926 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.614936 4707 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615085 4707 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615093 4707 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615098 4707 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615104 4707 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615110 4707 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615115 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615120 4707 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615124 4707 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615129 4707 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615133 4707 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615137 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615141 4707 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615146 4707 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
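[Annotation] Once the warnings settle, feature_gate.go:386 logs the effective result as feature gates: {map[Name:bool ...]}; it appears above with the same fifteen entries each time. A sketch turning that line into a Python dict for inspection:

import re

GATE_MAP = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_gate_map(text):
    """Parse a 'feature gates: {map[K:v K:v ...]}' log line into {name: bool}."""
    m = GATE_MAP.search(text)
    if not m:
        return {}
    return {k: v == "true"
            for k, v in (item.split(":", 1) for item in m.group(1).split())}

# Against the line above:
#   parse_gate_map(line)["KMSv1"]                     -> True
#   parse_gate_map(line)["DynamicResourceAllocation"] -> False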
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615152 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615157 4707 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615160 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615164 4707 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615168 4707 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615172 4707 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615176 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615181 4707 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615185 4707 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615189 4707 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615192 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615196 4707 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615200 4707 feature_gate.go:330] unrecognized feature gate: Example Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615204 4707 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615208 4707 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615211 4707 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615215 4707 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615220 4707 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615226 4707 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615231 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615235 4707 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615241 4707 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615245 4707 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615249 4707 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615254 4707 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615257 4707 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615261 4707 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615265 4707 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615268 4707 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615272 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615276 4707 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615280 4707 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615284 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615288 4707 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615292 4707 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615295 4707 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615299 4707 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615303 4707 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615308 4707 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
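[Annotation] Three message shapes recur throughout: feature_gate.go:330 (unknown gate, skipped), feature_gate.go:351 (a deprecated gate explicitly set), and feature_gate.go:353 (a GA gate explicitly set, a no-op today that will stop being settable once the gate is removed). A sketch that buckets the journal text by those three call sites:

import re

# The three feature_gate.go call sites observed in this journal.
CLASSES = re.compile(
    r"feature_gate\.go:(330|351|353)\]\s+"
    r"(?:unrecognized feature gate:\s*(\w+)"
    r"|Setting (?:GA|deprecated) feature gate (\w+)=\w+)"
)

def classify_gate_messages(text):
    """Bucket gate names by class: 330 unknown, 351 deprecated-set, 353 GA-set."""
    buckets = {"330": set(), "351": set(), "353": set()}
    for m in CLASSES.finditer(text):
        buckets[m.group(1)].add(m.group(2) or m.group(3))
    return buckets

# Expected on the dump above: buckets["351"] == {"KMSv1"} and buckets["353"]
# == {"CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders",
#     "ValidatingAdmissionPolicy"}.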
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615312 4707 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615317 4707 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615321 4707 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615326 4707 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615331 4707 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615336 4707 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615341 4707 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615345 4707 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615351 4707 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615356 4707 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615360 4707 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615364 4707 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615368 4707 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615372 4707 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615376 4707 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615380 4707 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615383 4707 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615387 4707 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.615392 4707 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.615399 4707 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.615583 4707 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.618093 4707 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.618176 4707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.618759 4707 server.go:997] "Starting client certificate rotation"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.618779 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.619191 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-04 11:20:03.69549099 +0000 UTC
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.619267 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.623514 4707 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.624883 4707 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.624915 4707 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.632641 4707 log.go:25] "Validated CRI v1 runtime API"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.660082 4707 log.go:25] "Validated CRI v1 image API"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.661989 4707 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.664268 4707 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-13-06-49-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.664390 4707 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
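
The certificate_manager.go:356 records above explain why the kubelet tries to rotate its client certificate immediately: the certificate is valid until 2026-02-24, but the jittered rotation deadline (2025-12-04) is already in the past at boot time, so "Rotating certificates" fires right away and fails while the API server is still down. client-go's certificate manager picks that deadline at a random point at roughly 70-90% of the certificate's lifetime; a back-of-the-envelope Python sketch of the idea (the issue time below is an assumption for illustration, not taken from the log):

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        # Mimics client-go's jitter: a random point between roughly 70% and
        # 90% of the certificate's total validity period.
        total = (not_after - not_before).total_seconds()
        return not_before + timedelta(seconds=total * random.uniform(0.7, 0.9))

    # Expiry taken from the log record above (kube-apiserver-client-kubelet).
    not_after = datetime(2026, 2, 24, 5, 52, 8)
    # Assumed issue time: a one-year certificate issued 2025-02-24 would put
    # the 70-90% window at ~2025-11-06 .. ~2026-01-18, which is consistent
    # with the logged deadline of 2025-12-04.
    not_before = datetime(2025, 2, 24, 5, 52, 8)
    print(rotation_deadline(not_before, not_after))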
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.676086 4707 manager.go:217] Machine: {Timestamp:2025-12-13 06:54:51.674897193 +0000 UTC m=+0.172076260 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3c05be00-dcfd-4478-b7d9-3ddef2164ce4 BootID:07dfa643-7b33-4289-a4de-483e92c58d5e Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2b:0c:7a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2b:0c:7a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:40:e8:23 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fb:56:93 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6b:01:60 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:38:fe:66 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:9c:f5:af:61:ec Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ea:ef:ac:4c:06:a0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.676310 4707 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.676460 4707 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.677521 4707 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.677730 4707 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.677772 4707 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.678038 4707 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.678051 4707 container_manager_linux.go:303] "Creating device plugin manager"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.678263 4707 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.678298 4707 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.678547 4707 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.678668 4707 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.679658 4707 kubelet.go:418] "Attempting to sync node with API server"
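
The container_manager_linux.go:272 record above embeds the kubelet's effective NodeConfig as a single JSON object, so system-reserved resources and hard-eviction thresholds can be pulled straight out of the journal. A minimal Python sketch, assuming the log has been saved with one record per line (the kubelet.log filename is hypothetical):

    import json
    import re
    import sys

    # The NodeConfig is logged as nodeConfig={...}; on a one-record-per-line
    # dump the JSON runs to the end of the line, so a greedy match suffices.
    NODECONFIG = re.compile(r"nodeConfig=(\{.*\})")

    def node_config(path: str) -> dict:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                m = NODECONFIG.search(line)
                if m:
                    return json.loads(m.group(1))
        raise SystemExit("no nodeConfig record found")

    if __name__ == "__main__":
        # Usage: python node_config.py kubelet.log
        cfg = node_config(sys.argv[1])
        print("system reserved:", cfg["SystemReserved"])
        for t in cfg["HardEvictionThresholds"]:
            print(t["Signal"], t["Operator"], t["Value"])

For this node that prints the 200m CPU / 350Mi memory reservation and the five eviction signals (memory.available < 100Mi, nodefs/imagefs availability and inode percentages) visible in the record above.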
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.679685 4707 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.679759 4707 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.679778 4707 kubelet.go:324] "Adding apiserver pod source"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.679798 4707 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.681742 4707 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.681807 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.681906 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError"
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.681934 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.682035 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.682490 4707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
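
Every failure so far (the CSR post, the Service and Node reflectors, and more below) points at the same place: https://api-int.crc.testing:6443 is refusing connections, meaning the API server simply is not up yet this early in boot, and each client retries with backoff. To summarize which endpoints are affected, the refused requests can be grouped by resource path from a saved journal excerpt; a minimal Python sketch:

    import re
    import sys
    from collections import Counter

    # Any https URL in a record; quotes in the log are escaped as \" so the
    # character class stops at both the quote and the backslash.
    URL = re.compile(r'https://[^"\\ ]+')

    def refused_endpoints(path: str) -> Counter:
        counts: Counter = Counter()
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                if "connection refused" not in line:
                    continue
                m = URL.search(line)
                if m:
                    # Drop the query string; the resource path is what matters.
                    counts[m.group(0).split("?")[0]] += 1
        return counts

    if __name__ == "__main__":
        # Usage: python refused.py kubelet.log
        for url, n in refused_endpoints(sys.argv[1]).most_common():
            print(f"{n:3d}  {url}")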
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.683570 4707 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684492 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684521 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684531 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684540 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684553 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684563 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684572 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684584 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684595 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684603 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684642 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.684652 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.685022 4707 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.686232 4707 server.go:1280] "Started kubelet"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.686908 4707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 06:54:51 crc systemd[1]: Started Kubernetes Kubelet.
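
The ratelimit.go:55 record configures the podresources endpoint with qps=100 and burstTokens=10: sustained throughput is capped at 100 requests per second, with a bucket of 10 tokens to absorb short spikes. The kubelet implements this with Go's golang.org/x/time/rate limiter; the Python token bucket below is an illustrative analogue of the same semantics, not the kubelet's code:

    import time

    class TokenBucket:
        """Token-bucket limiter: refills at `qps` tokens/s, holds at most `burst`."""

        def __init__(self, qps: float, burst: int):
            self.qps = qps
            self.burst = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    if __name__ == "__main__":
        rl = TokenBucket(qps=100, burst=10)
        # The first 10 back-to-back calls drain the burst; after that only
        # ~100 per second are admitted.
        print(sum(rl.allow() for _ in range(50)), "of 50 immediate calls admitted")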
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.688736 4707 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.688776 4707 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.688999 4707 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.691399 4707 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.691440 4707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.691704 4707 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:52:43.9194429 +0000 UTC
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.691751 4707 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 652h57m52.227694186s for next certificate rotation
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.691968 4707 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.210:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880b3eeb12e03d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 06:54:51.685405657 +0000 UTC m=+0.182584724,LastTimestamp:2025-12-13 06:54:51.685405657 +0000 UTC m=+0.182584724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.694722 4707 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.694740 4707 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.695136 4707 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.695320 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.695593 4707 factory.go:55] Registering systemd factory
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.695617 4707 factory.go:221] Registration of the systemd container factory successfully
Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.696031 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.696091 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError"
Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.696855 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="200ms"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.697218 4707 factory.go:153] Registering CRI-O factory
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.697244 4707 factory.go:221] Registration of the crio container factory successfully
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.697327 4707 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.697357 4707 factory.go:103] Registering Raw factory
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.697376 4707 manager.go:1196] Started watching for new ooms in manager
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.698043 4707 manager.go:319] Starting recovery of all containers
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.701246 4707 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705360 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705415 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705432 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705448 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705462 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705473 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705486 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705499 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705515 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705529 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705541 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705554 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705566 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705582 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705596 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705608 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705623 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705635 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705646 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705659 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705671 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705704 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705718 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705731 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705760 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705774 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705789 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705802 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705815 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705829 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705842 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705854 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705867 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705882 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705895 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705908 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705922 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705934 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705947 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.705998 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706012 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706025 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706037 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706050 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706062 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706074 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706087 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706101 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706115 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706128 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706141 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706156 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706173 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706188 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706201 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706214 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706229 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706242 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706257 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706271 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706284 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706351 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706366 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706379 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706392 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706405 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706418 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706430 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706442 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706455 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706467 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706481 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706493 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706505 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706518 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706531 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706544 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706555 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706569 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706581 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706594 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706607 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706618 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706629 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706641 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706654 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706666 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706678 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706690 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706703 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706717 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706731 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706744 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706757 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706771 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706784 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706797 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706811 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706825 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706838 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706851 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706863 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706876 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706896 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706912 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706929 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.706945 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707086 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707103 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707120 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707143 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707161 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707176 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707191 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707204 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707217 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707230 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707246 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707260 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707276 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707288 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707303 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707319 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707332 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707344 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707359 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707372 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707386 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707400 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707415 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707429 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707443 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707456 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707470 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707483 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707496 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707509 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707524 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707536 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707550 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707563 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707576 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707591 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707606 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707619 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707632 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707646 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707659 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707672 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707688 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707701 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707714 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707726 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707739 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707751 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707764 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707778 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707794 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707808 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707819 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707832 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707853 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.707867 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708809 4707 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708846 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708864 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708886 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708900 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708913 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708926 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708938 4707 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708969 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.708986 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709000 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709013 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709027 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709039 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709051 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709065 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709079 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709091 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709382 4707 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709400 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709424 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709437 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709466 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709487 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709501 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709520 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709534 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709549 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709570 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709584 4707 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709603 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709616 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709631 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709648 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709661 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709681 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.709694 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710100 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710131 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710159 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710175 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710195 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710213 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710228 4707 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710241 4707 reconstruct.go:97] "Volume reconstruction finished" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.710251 4707 reconciler.go:26] "Reconciler: start to sync state" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.729412 4707 manager.go:324] Recovery completed Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.739125 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741056 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741137 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741153 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741209 4707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741773 4707 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741789 4707 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.741806 4707 state_mem.go:36] "Initialized new in-memory state store" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.743645 4707 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.743714 4707 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 06:54:51 crc kubenswrapper[4707]: I1213 06:54:51.743750 4707 kubelet.go:2335] "Starting kubelet main sync loop" Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.743813 4707 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 06:54:51 crc kubenswrapper[4707]: W1213 06:54:51.744775 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.744852 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.796327 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.844231 4707 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.896755 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.898534 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="400ms" Dec 13 06:54:51 crc kubenswrapper[4707]: E1213 06:54:51.997781 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.044697 4707 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.098368 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.199199 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.299297 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.299735 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="800ms" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.324299 4707 policy_none.go:49] "None policy: Start" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.325112 4707 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 06:54:52 crc kubenswrapper[4707]: 
I1213 06:54:52.325141 4707 state_mem.go:35] "Initializing new in-memory state store" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.399731 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.445867 4707 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.500880 4707 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.522530 4707 manager.go:334] "Starting Device Plugin manager" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.522669 4707 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.522685 4707 server.go:79] "Starting device plugin registration server" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.523181 4707 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.523197 4707 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.523529 4707 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.523649 4707 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.523658 4707 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.530677 4707 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 06:54:52 crc kubenswrapper[4707]: W1213 06:54:52.540497 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.540571 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.625463 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.627296 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.627338 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.627357 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.627387 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 
06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.627926 4707 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.210:6443: connect: connection refused" node="crc" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.689998 4707 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:52 crc kubenswrapper[4707]: W1213 06:54:52.731686 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.731816 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.828703 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.829840 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.829889 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.829903 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:52 crc kubenswrapper[4707]: I1213 06:54:52.829925 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 06:54:52 crc kubenswrapper[4707]: E1213 06:54:52.830496 4707 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.210:6443: connect: connection refused" node="crc" Dec 13 06:54:53 crc kubenswrapper[4707]: W1213 06:54:53.049354 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:53 crc kubenswrapper[4707]: E1213 06:54:53.049424 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:53 crc kubenswrapper[4707]: E1213 06:54:53.100763 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="1.6s" Dec 13 06:54:53 crc kubenswrapper[4707]: W1213 06:54:53.174894 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:53 crc kubenswrapper[4707]: E1213 06:54:53.175082 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.231459 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.233108 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.233178 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.233207 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.233253 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 06:54:53 crc kubenswrapper[4707]: E1213 06:54:53.233903 4707 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.210:6443: connect: connection refused" node="crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.246253 4707 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.246409 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.247749 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.247809 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.247834 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.248090 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.248456 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.248575 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.249598 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.249738 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.249781 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.249799 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.249924 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.250080 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.250231 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.250209 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.250492 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251190 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251382 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251413 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251882 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251936 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251981 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251999 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.252006 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.251898 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.253308 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.253719 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.253821 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.253832 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.254020 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.254220 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.254352 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.254464 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.254518 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255072 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255193 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255276 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255470 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255497 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255511 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255705 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.255808 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.256695 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.256757 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.256782 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328595 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328631 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328652 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328672 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328691 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328748 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328780 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328827 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328847 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328866 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328885 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328925 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328944 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.328990 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.329008 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.430991 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431286 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431353 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431394 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431443 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431470 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431456 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431539 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431549 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431565 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431587 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431603 4707 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431622 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431634 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431635 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431654 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431681 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431711 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431735 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431774 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431740 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431827 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431831 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431879 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431896 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.431910 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.432001 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.432081 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.432336 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.432377 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.588575 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.595465 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.610645 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.628692 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: W1213 06:54:53.632038 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-78c4e5d75c8495b2434d3f3aa6c324d6a7103f1114e8316ae3d3dd6be6327dd8 WatchSource:0}: Error finding container 78c4e5d75c8495b2434d3f3aa6c324d6a7103f1114e8316ae3d3dd6be6327dd8: Status 404 returned error can't find the container with id 78c4e5d75c8495b2434d3f3aa6c324d6a7103f1114e8316ae3d3dd6be6327dd8 Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.637262 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:53 crc kubenswrapper[4707]: W1213 06:54:53.648678 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bf5e4f8920842e1e8dea7c07889ca036a6bac7ba7ddf950e3b4c12a06c2c40ea WatchSource:0}: Error finding container bf5e4f8920842e1e8dea7c07889ca036a6bac7ba7ddf950e3b4c12a06c2c40ea: Status 404 returned error can't find the container with id bf5e4f8920842e1e8dea7c07889ca036a6bac7ba7ddf950e3b4c12a06c2c40ea Dec 13 06:54:53 crc kubenswrapper[4707]: W1213 06:54:53.651381 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e35ac981e81925536adc108c2215734b86a1a885e4b83fc9e4dfc285ffbea235 WatchSource:0}: Error finding container e35ac981e81925536adc108c2215734b86a1a885e4b83fc9e4dfc285ffbea235: Status 404 returned error can't find the container with id e35ac981e81925536adc108c2215734b86a1a885e4b83fc9e4dfc285ffbea235 Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.688084 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 13 06:54:53 crc kubenswrapper[4707]: E1213 06:54:53.689734 4707 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.689834 4707 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.749716 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e35ac981e81925536adc108c2215734b86a1a885e4b83fc9e4dfc285ffbea235"} Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.750662 
4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bf5e4f8920842e1e8dea7c07889ca036a6bac7ba7ddf950e3b4c12a06c2c40ea"} Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.751616 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f5d93220f7e654a2e26a4c0b55df5036634132fb9a48636b3288e3c7aa14702e"} Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.752689 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78c4e5d75c8495b2434d3f3aa6c324d6a7103f1114e8316ae3d3dd6be6327dd8"} Dec 13 06:54:53 crc kubenswrapper[4707]: I1213 06:54:53.753879 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef440fe1ed8b7078526fb8770aabd263af76c8e809819fd780593196e01517c1"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.034391 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.035515 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.035555 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.035570 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.035595 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 06:54:54 crc kubenswrapper[4707]: E1213 06:54:54.036042 4707 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.210:6443: connect: connection refused" node="crc" Dec 13 06:54:54 crc kubenswrapper[4707]: W1213 06:54:54.432428 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:54 crc kubenswrapper[4707]: E1213 06:54:54.432855 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:54 crc kubenswrapper[4707]: W1213 06:54:54.664834 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:54 crc kubenswrapper[4707]: E1213 06:54:54.664933 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.690855 4707 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:54 crc kubenswrapper[4707]: E1213 06:54:54.701711 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="3.2s" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.757850 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273" exitCode=0 Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.758009 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.758045 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.759530 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.759568 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.759582 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.760137 4707 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad" exitCode=0 Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.760250 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.760275 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.761554 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.761585 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.761586 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.761688 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 
06:54:54.765307 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.765349 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.765364 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.765546 4707 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="11c91c8c478bd3f6ce828d5d5b5e976b5134815f74f77e0a79fba45ede6e7b3f" exitCode=0 Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.765590 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"11c91c8c478bd3f6ce828d5d5b5e976b5134815f74f77e0a79fba45ede6e7b3f"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.765612 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.766825 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.766843 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.766853 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.771229 4707 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ade8dcbdc0d460ae38d8889daff7217a9a299f98ae12e90a38075492fce36d2f" exitCode=0 Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.771340 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.771762 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ade8dcbdc0d460ae38d8889daff7217a9a299f98ae12e90a38075492fce36d2f"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.772464 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.772507 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.772518 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.774470 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.774495 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664"} Dec 13 06:54:54 crc kubenswrapper[4707]: I1213 06:54:54.774509 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90"} Dec 13 06:54:55 crc kubenswrapper[4707]: W1213 06:54:55.055212 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:55 crc kubenswrapper[4707]: E1213 06:54:55.055302 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.637012 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.638573 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.638611 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.638627 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.638652 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 06:54:55 crc kubenswrapper[4707]: E1213 06:54:55.639182 4707 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.210:6443: connect: connection refused" node="crc" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.690964 4707 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.780024 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.780072 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.780094 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045"} Dec 13 
06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.780109 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.782830 4707 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f" exitCode=0 Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.782931 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.783659 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.786158 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.786205 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.786229 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.789306 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e554b4e7ffa5fe1326252d3a7a0cb5db29682f98882f5542e66139f20c1b8f99"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.789489 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.791691 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.791737 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.791844 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.796609 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a1375028a00b92c6c7b55cb31957ace06bffdca33e43c4af1d5d8c9541650d7"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.796647 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.796660 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"775db34acb80b11e6b011f91562f4891818796bc1325cb07e939a15827f948a0"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.796676 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"47206b1e28eeebaa4c5878e18539c274f4a3276a4fd41721f3907f2cb5e46e5b"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.797974 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.798081 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.798155 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.803638 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626"} Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.804015 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.805013 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.805039 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:55 crc kubenswrapper[4707]: I1213 06:54:55.805050 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:56 crc kubenswrapper[4707]: W1213 06:54:56.006176 4707 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.210:6443: connect: connection refused Dec 13 06:54:56 crc kubenswrapper[4707]: E1213 06:54:56.006277 4707 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.210:6443: connect: connection refused" logger="UnhandledError" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.809016 4707 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43" exitCode=0 Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.809122 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.809141 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43"} Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.810082 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.810118 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.810130 4707 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.814219 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc74458219319112fa0155aa817b40537ebd2738497f38c58022afc90ebc15fd"} Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.814283 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.814342 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.814292 4707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.814420 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.814439 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.815330 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.815363 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.815376 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.815573 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.815598 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.815610 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.816264 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.816274 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.816306 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.816284 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.816321 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:56 crc kubenswrapper[4707]: I1213 06:54:56.816339 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.796074 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.822420 4707 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.822480 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.822842 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a"} Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.822926 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de"} Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823021 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823023 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6"} Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823162 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5"} Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823186 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4"} Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823684 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823734 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.823747 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.824534 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.824586 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:57 crc kubenswrapper[4707]: I1213 06:54:57.824637 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.825459 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.826563 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.826626 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.826657 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 
06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.840241 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.841587 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.841616 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.841625 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.841644 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.847275 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.847429 4707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.847471 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.848890 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.848941 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:58 crc kubenswrapper[4707]: I1213 06:54:58.848974 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:54:59 crc kubenswrapper[4707]: I1213 06:54:59.844916 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:54:59 crc kubenswrapper[4707]: I1213 06:54:59.845053 4707 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 06:54:59 crc kubenswrapper[4707]: I1213 06:54:59.845095 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:54:59 crc kubenswrapper[4707]: I1213 06:54:59.845985 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:54:59 crc kubenswrapper[4707]: I1213 06:54:59.846015 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:54:59 crc kubenswrapper[4707]: I1213 06:54:59.846026 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.462158 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.462450 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.463479 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.463523 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 
06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.463539 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.634840 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.639409 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.829712 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.829816 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.830715 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.830745 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.830757 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.873378 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.873606 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.874875 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.874911 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:00 crc kubenswrapper[4707]: I1213 06:55:00.874923 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.643879 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.644108 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.645149 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.645186 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.645199 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.832300 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.833340 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:01 crc 
kubenswrapper[4707]: I1213 06:55:01.833368 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:01 crc kubenswrapper[4707]: I1213 06:55:01.833385 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.207754 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:02 crc kubenswrapper[4707]: E1213 06:55:02.530773 4707 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.604062 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.604259 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.605206 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.605243 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.605255 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.835086 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.836243 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.836307 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:02 crc kubenswrapper[4707]: I1213 06:55:02.836333 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:03 crc kubenswrapper[4707]: I1213 06:55:03.462892 4707 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 06:55:03 crc kubenswrapper[4707]: I1213 06:55:03.462996 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.190697 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.191467 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.193369 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.193430 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.193453 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.533827 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.533946 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.535051 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.535084 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:05 crc kubenswrapper[4707]: I1213 06:55:05.535098 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:06 crc kubenswrapper[4707]: I1213 06:55:06.690859 4707 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.767359 4707 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.767414 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.778841 4707 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.778895 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.850099 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.851193 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="cc74458219319112fa0155aa817b40537ebd2738497f38c58022afc90ebc15fd" exitCode=255 Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.851230 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cc74458219319112fa0155aa817b40537ebd2738497f38c58022afc90ebc15fd"} Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.851353 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.852160 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.852181 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.852189 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:07 crc kubenswrapper[4707]: I1213 06:55:07.852582 4707 scope.go:117] "RemoveContainer" containerID="cc74458219319112fa0155aa817b40537ebd2738497f38c58022afc90ebc15fd" Dec 13 06:55:09 crc kubenswrapper[4707]: I1213 06:55:09.852106 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:55:09 crc kubenswrapper[4707]: I1213 06:55:09.858355 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 06:55:09 crc kubenswrapper[4707]: I1213 06:55:09.860094 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e"} Dec 13 06:55:10 crc kubenswrapper[4707]: I1213 06:55:10.863439 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:10 crc kubenswrapper[4707]: I1213 06:55:10.864679 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:10 crc kubenswrapper[4707]: I1213 06:55:10.864726 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:10 crc kubenswrapper[4707]: I1213 06:55:10.864738 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:10 crc kubenswrapper[4707]: I1213 06:55:10.868294 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:55:11 crc kubenswrapper[4707]: I1213 06:55:11.644583 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:55:11 crc kubenswrapper[4707]: I1213 06:55:11.871865 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:11 crc kubenswrapper[4707]: I1213 06:55:11.872948 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:11 crc kubenswrapper[4707]: I1213 06:55:11.873009 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 13 06:55:11 crc kubenswrapper[4707]: I1213 06:55:11.873024 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:12 crc kubenswrapper[4707]: E1213 06:55:12.530906 4707 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 13 06:55:12 crc kubenswrapper[4707]: E1213 06:55:12.761022 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.762081 4707 trace.go:236] Trace[1980130143]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 06:54:59.848) (total time: 12913ms): Dec 13 06:55:12 crc kubenswrapper[4707]: Trace[1980130143]: ---"Objects listed" error: 12913ms (06:55:12.761) Dec 13 06:55:12 crc kubenswrapper[4707]: Trace[1980130143]: [12.913609666s] [12.913609666s] END Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.762149 4707 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 13 06:55:12 crc kubenswrapper[4707]: E1213 06:55:12.764907 4707 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.874864 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.875252 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.876723 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e" exitCode=255 Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.876774 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e"} Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.876828 4707 scope.go:117] "RemoveContainer" containerID="cc74458219319112fa0155aa817b40537ebd2738497f38c58022afc90ebc15fd" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.876997 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.877823 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.877850 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.877863 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:12 crc kubenswrapper[4707]: I1213 06:55:12.878558 4707 scope.go:117] 
"RemoveContainer" containerID="63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e" Dec 13 06:55:12 crc kubenswrapper[4707]: E1213 06:55:12.878769 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.463014 4707 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.463079 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.878881 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.879663 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.879686 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.879694 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:13 crc kubenswrapper[4707]: I1213 06:55:13.880118 4707 scope.go:117] "RemoveContainer" containerID="63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e" Dec 13 06:55:14 crc kubenswrapper[4707]: E1213 06:55:14.288761 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.289011 4707 trace.go:236] Trace[1576537609]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 06:55:00.413) (total time: 13875ms): Dec 13 06:55:14 crc kubenswrapper[4707]: Trace[1576537609]: ---"Objects listed" error: 13875ms (06:55:14.288) Dec 13 06:55:14 crc kubenswrapper[4707]: Trace[1576537609]: [13.875556353s] [13.875556353s] END Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.289036 4707 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.289475 4707 trace.go:236] Trace[1759811541]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 06:55:00.392) 
(total time: 13896ms):
Dec 13 06:55:14 crc kubenswrapper[4707]: Trace[1759811541]: ---"Objects listed" error: 13896ms (06:55:14.289)
Dec 13 06:55:14 crc kubenswrapper[4707]: Trace[1759811541]: [13.896902411s] [13.896902411s] END
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.289507 4707 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.289787 4707 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.290594 4707 trace.go:236] Trace[248702858]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Dec-2025 06:54:59.470) (total time: 14819ms):
Dec 13 06:55:14 crc kubenswrapper[4707]: Trace[248702858]: ---"Objects listed" error: 14819ms (06:55:14.290)
Dec 13 06:55:14 crc kubenswrapper[4707]: Trace[248702858]: [14.819750145s] [14.819750145s] END
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.290620 4707 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.327901 4707 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.328042 4707 csr.go:257] certificate signing request csr-4n2vd is issued
Dec 13 06:55:14 crc kubenswrapper[4707]: I1213 06:55:14.696600 4707 apiserver.go:52] "Watching apiserver"
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.334535 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-13 06:50:14 +0000 UTC, rotation deadline is 2026-09-26 05:52:56.293844342 +0000 UTC
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.334624 4707 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6886h57m40.959223188s for next certificate rotation
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.393374 4707 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.393778 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.394226 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.394391 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.394262 4707 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.394694 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.394734 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.394791 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.395010 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.395075 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.395085 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.395650 4707 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.397981 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.398302 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.398854 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.399029 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.399662 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400038 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400299 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400534 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400820 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.401346 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.401646 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.401756 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.402183 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.402322 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213
06:55:15.402723 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.403530 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.403659 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.407375 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.407863 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408140 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408244 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408339 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408442 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408740 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 13 06:55:15 crc
kubenswrapper[4707]: I1213 06:55:15.408840 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408918 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409914 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410170 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.398777 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.399207 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410249 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410686 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410757 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.399256 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5").
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.411208 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.413245 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.413366 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.413738 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414134 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414528 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414909 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.587513 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.588120 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.399608 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.399985 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400006 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400262 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400496 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.400920 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.401589 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.401662 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.401930 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.402140 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.402672 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.403070 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.403137 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.588176 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jwkdp"] Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.590548 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.590592 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.590650 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.403484 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.590686 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.403597 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.590918 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.590982 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.591015 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591164 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591229 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jwkdp"
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591308 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591394 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591466 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.404544 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.404576 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.404830 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591977 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592183 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592195 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592365 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592395 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592452 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592473 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592492 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592513 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592531 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592548 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592566 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592585 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592855 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592896 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592929 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592977 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.592998 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593008 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593042 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593065 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593085 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593109 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593132 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593159 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593209 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593227 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593250 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593260 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593270 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593321 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593348 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593379 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593446 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593477 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593506 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593537 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593569 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593596 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593602 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593624 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593663 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593686 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593715 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593735 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593754 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593789 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.593812 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593832 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593847 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593853 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593904 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593931 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.593978 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594008 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594035 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594104 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594137 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594167 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594199 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594228 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594259 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594287 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594318 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594350 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594377 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594410 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594444 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594475 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594505 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594533 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594560 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594586 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594639 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594670 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594698 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594726 4707 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594755 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594783 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594810 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594840 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594869 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594896 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.596704 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.596773 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599529 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.599574 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599602 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599627 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599650 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599676 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599702 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599726 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599750 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599774 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599799 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 06:55:15 crc kubenswrapper[4707]: 
I1213 06:55:15.599823 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599846 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599869 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600235 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600271 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600297 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600320 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600346 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600368 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600388 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600421 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600445 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600468 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600492 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600516 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600545 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600568 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600588 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600613 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600646 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600672 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600701 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600724 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600747 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600769 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600790 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600813 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600838 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600863 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600892 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600919 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600945 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600990 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601017 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602270 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602332 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602444 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602479 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602511 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602536 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602563 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594102 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.594364 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.404995 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602694 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602588 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602902 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602931 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602973 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603027 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603251 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603551 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603761 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603792 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603803 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603829 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603853 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603876 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603902 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603926 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603966 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.603993 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604019 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604045 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604096 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604127 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604156 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604189 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604211 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604237 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604263 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604286 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604310 4707 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604336 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604366 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604398 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604426 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604452 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604528 4707 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604544 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604558 4707 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604572 4707 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.604585 4707 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604597 4707 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604616 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604633 4707 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604648 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604661 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604675 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604688 4707 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604701 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604715 4707 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604728 4707 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604743 4707 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604755 4707 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.604768 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.604847 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606276 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606587 4707 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606637 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606672 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606689 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606759 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606793 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606809 4707 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606824 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606841 4707 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606857 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606873 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606888 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.606641 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.607531 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.607630 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.607644 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.407665 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.407759 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.607792 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.608196 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.608466 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.608720 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.609520 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408096 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.408731 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409116 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409313 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409322 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409488 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409874 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.409880 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410131 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410360 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.410895 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.411029 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.411062 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.411084 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.411472 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.413690 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414018 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414456 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414497 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.414872 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.415213 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.588426 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591628 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.591746 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.595065 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.404931 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.596640 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.596666 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.596691 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.596865 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597018 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597123 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597197 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597231 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597314 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597354 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.597534 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598037 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598237 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598306 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598379 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598451 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598641 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598679 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598845 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.598856 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599142 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599157 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599282 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599349 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.599991 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600102 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600841 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.600878 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601146 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601172 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601355 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601429 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601559 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.601778 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602199 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602299 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602580 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.602673 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.611654 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.611984 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.612167 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.612197 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.612216 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.612808 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.613179 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.613527 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.613600 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.614178 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.614472 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.615283 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.615749 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.615883 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.615883 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.615917 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.616179 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.616485 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.616541 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.616545 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.616661 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.617222 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.617628 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.618569 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.618829 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.619163 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.619199 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.619670 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.619918 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.620151 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.620446 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.620798 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.621029 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.622523 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.622882 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.623163 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.623659 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.624020 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.406891 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.629066 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.629767 4707 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.631476 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.631862 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.632084 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.632199 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.632457 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.632515 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.632667 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.632981 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.633071 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.633791 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.633940 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:55:16.133918774 +0000 UTC m=+24.631097881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.634080 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.634182 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.634883 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.635514 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.636488 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.637004 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.637972 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.638262 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.638554 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.639356 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.641065 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.641153 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.641388 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.641733 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.642149 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.642204 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.642406 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.642690 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.642885 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.643044 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.643399 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.643688 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.644007 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.644236 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.644560 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.644740 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.644803 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.644917 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.645079 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.645119 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.645159 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.645277 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.645344 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.645468 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.645552 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.645617 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:16.145596342 +0000 UTC m=+24.642775389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.645715 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.645774 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:16.145759176 +0000 UTC m=+24.642938223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.646327 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.647070 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.649366 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.652251 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.652393 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.652779 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.653100 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.654520 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.656008 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.656350 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.657367 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.659159 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.659442 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.659632 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.659665 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.659685 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.659746 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:16.159725969 +0000 UTC m=+24.656905086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.660993 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.661097 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.661190 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:15 crc kubenswrapper[4707]: E1213 06:55:15.662349 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:16.162329631 +0000 UTC m=+24.659508728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.666877 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.670342 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.673335 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.676734 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.682405 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.690711 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.699525 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.708396 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.708776 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/97ae1fcf-604e-4b39-bbb8-7e35b00d2509-hosts-file\") pod \"node-resolver-jwkdp\" (UID: \"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\") " pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.708819 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7dq\" (UniqueName: \"kubernetes.io/projected/97ae1fcf-604e-4b39-bbb8-7e35b00d2509-kube-api-access-kb7dq\") pod \"node-resolver-jwkdp\" (UID: \"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\") " pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.708891 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.708940 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709092 4707 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709112 4707 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709145 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709159 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 
06:55:15.709170 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709181 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709192 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709224 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709235 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709246 4707 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709257 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709268 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709299 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709316 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709328 4707 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709338 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709349 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709360 4707 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709373 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709408 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709422 4707 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709433 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709444 4707 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709456 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709467 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709479 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709491 4707 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709503 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709514 4707 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709572 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709608 4707 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709651 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709774 4707 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709810 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709847 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709858 4707 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709869 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709877 4707 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709886 4707 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709895 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709928 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.709938 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc 
kubenswrapper[4707]: I1213 06:55:15.709947 4707 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710415 4707 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710427 4707 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710436 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710472 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710482 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710491 4707 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710500 4707 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710509 4707 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710517 4707 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710552 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710563 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710571 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710580 4707 reconciler_common.go:293] "Volume detached 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710589 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710598 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710652 4707 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710665 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710676 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710685 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710694 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710704 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710714 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710730 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710749 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710763 4707 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" 
DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710773 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710783 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710792 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710802 4707 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710811 4707 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710820 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710829 4707 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710839 4707 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710848 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710857 4707 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710867 4707 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710875 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.710990 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" 
Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711001 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711012 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711021 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711030 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711043 4707 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711051 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711061 4707 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711070 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711080 4707 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711090 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711099 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711108 4707 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711116 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc 
kubenswrapper[4707]: I1213 06:55:15.711126 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711135 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711143 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711152 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711161 4707 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711169 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711177 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711186 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711194 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711204 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711213 4707 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711221 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711230 4707 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc 
kubenswrapper[4707]: I1213 06:55:15.711238 4707 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711246 4707 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711255 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711263 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.711271 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712216 4707 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712232 4707 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712242 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712250 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712325 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712338 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712346 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712355 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712387 4707 
reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712396 4707 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712406 4707 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712415 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712423 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712432 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712440 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712469 4707 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712479 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712488 4707 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712496 4707 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712504 4707 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712513 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712541 4707 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712550 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712559 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712567 4707 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712575 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712588 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712596 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712604 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712613 4707 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712621 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712629 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712637 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712645 4707 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712653 4707 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712661 4707 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712669 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712678 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712688 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712697 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712705 4707 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712714 4707 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712724 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712735 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712747 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712757 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712766 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712774 4707 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712783 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712798 4707 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.712806 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.717915 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.743171 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.748436 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.757634 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.769398 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.778228 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.788433 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:15 crc 
kubenswrapper[4707]: I1213 06:55:15.813947 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb7dq\" (UniqueName: \"kubernetes.io/projected/97ae1fcf-604e-4b39-bbb8-7e35b00d2509-kube-api-access-kb7dq\") pod \"node-resolver-jwkdp\" (UID: \"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\") " pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.814018 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/97ae1fcf-604e-4b39-bbb8-7e35b00d2509-hosts-file\") pod \"node-resolver-jwkdp\" (UID: \"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\") " pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.814165 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/97ae1fcf-604e-4b39-bbb8-7e35b00d2509-hosts-file\") pod \"node-resolver-jwkdp\" (UID: \"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\") " pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.828651 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb7dq\" (UniqueName: \"kubernetes.io/projected/97ae1fcf-604e-4b39-bbb8-7e35b00d2509-kube-api-access-kb7dq\") pod \"node-resolver-jwkdp\" (UID: \"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\") " pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:15 crc kubenswrapper[4707]: I1213 06:55:15.941403 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jwkdp" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.008127 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.191291 4707 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.191378 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.216685 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.216794 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.216849 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.216864 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:55:17.216843971 +0000 UTC m=+25.714023038 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.216893 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.216910 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.216941 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.216979 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:17.216968954 +0000 UTC m=+25.714147991 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217068 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217086 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217098 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217136 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:17.217125988 +0000 UTC m=+25.714305035 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217206 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217218 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217248 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217284 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:17.217275532 +0000 UTC m=+25.714454579 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217362 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.217418 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:17.217408655 +0000 UTC m=+25.714587702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.430128 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.431146 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.431746 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.432313 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.433223 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.433344 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.434367 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.435275 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.436238 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.439000 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.439869 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.439917 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.442808 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.444254 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.446034 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.447984 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.448902 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.452414 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.453289 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.455731 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.456315 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.457139 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.458499 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.459237 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.460587 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.461249 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.462682 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.463311 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.464895 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.466082 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.466864 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.468250 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.468980 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.470534 4707 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.470679 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.473079 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.474372 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.475074 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.478106 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.479021 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.480256 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.481175 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.482593 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.483290 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.484342 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.485304 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.485901 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.486369 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.487254 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.488069 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.488114 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.488807 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.489302 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.490110 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.490661 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.491700 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.492288 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.492735 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.493616 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-ns45v"] Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.493856 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kmn45"] Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.494027 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.494084 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rqdnb"] Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.494155 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.495802 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg6dr"] Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.496653 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497022 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497067 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497262 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497339 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497580 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497647 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497736 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497766 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497792 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.497648 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.499571 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.499871 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500067 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500148 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500326 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500407 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500592 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500630 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.500763 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 
06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.502687 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.505035 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.507744 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: W1213 06:55:16.516077 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fe6c0d6497128614270047deef1a3643d148bffc6d6796541893091282967e71 WatchSource:0}: Error finding container fe6c0d6497128614270047deef1a3643d148bffc6d6796541893091282967e71: Status 404 returned error can't find the container with id fe6c0d6497128614270047deef1a3643d148bffc6d6796541893091282967e71 Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519215 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-slash\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519270 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-ovn\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519298 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-env-overrides\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519318 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-cni-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519340 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovn-node-metrics-cert\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519362 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-kubelet\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519386 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519407 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-kubelet\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519428 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5a779a-7592-4da7-950b-3a0679d7a4bf-proxy-tls\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519447 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-ovn-kubernetes\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519466 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-system-cni-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519485 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519517 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-systemd\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519535 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-etc-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519554 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-cni-binary-copy\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519573 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5d5a779a-7592-4da7-950b-3a0679d7a4bf-rootfs\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519590 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-log-socket\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519625 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d5a779a-7592-4da7-950b-3a0679d7a4bf-mcd-auth-proxy-config\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519643 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpzt\" (UniqueName: \"kubernetes.io/projected/5d5a779a-7592-4da7-950b-3a0679d7a4bf-kube-api-access-4wpzt\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519663 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-node-log\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519683 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-bin\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519703 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-config\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519725 4707 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-cni-multus\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519745 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-hostroot\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519765 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-daemon-config\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519785 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-multus-certs\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519804 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-system-cni-dir\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519823 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-os-release\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519863 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-conf-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519884 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-etc-kubernetes\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519905 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2l8\" (UniqueName: \"kubernetes.io/projected/5aefe869-3e59-4e2c-a755-4a727d68d3aa-kube-api-access-9r2l8\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " 
pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519928 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-netns\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519968 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-var-lib-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.519991 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520015 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-os-release\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520034 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-socket-dir-parent\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520055 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjvt2\" (UniqueName: \"kubernetes.io/projected/75175fa9-a1b3-470e-a782-8edf5b3d464b-kube-api-access-rjvt2\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520073 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cnibin\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520116 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-systemd-units\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520137 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-cnibin\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520159 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-cni-bin\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520178 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4chz\" (UniqueName: \"kubernetes.io/projected/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-kube-api-access-p4chz\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520200 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-k8s-cni-cncf-io\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520220 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-script-lib\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520240 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520260 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-netd\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520282 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-netns\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520302 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cni-binary-copy\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520338 4707 reconciler_common.go:293] "Volume detached 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520349 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.520417 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.535748 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: W1213 06:55:16.546923 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-72d32dbc757780f5691ac322d62873a813efce318c7a366cc5ffdedf57f3d436 WatchSource:0}: Error finding container 72d32dbc757780f5691ac322d62873a813efce318c7a366cc5ffdedf57f3d436: Status 404 returned error can't find the container with id 72d32dbc757780f5691ac322d62873a813efce318c7a366cc5ffdedf57f3d436 Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.547029 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.555215 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.563654 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.569426 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.575869 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.584626 4707 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.592801 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.601409 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.617391 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621018 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-k8s-cni-cncf-io\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621061 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621080 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-script-lib\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621096 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-netd\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621111 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-netns\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621125 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cni-binary-copy\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621139 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-ovn\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621154 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-env-overrides\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621169 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-cni-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621182 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-slash\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621197 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovn-node-metrics-cert\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621212 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-kubelet\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621225 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621241 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-kubelet\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621256 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5a779a-7592-4da7-950b-3a0679d7a4bf-proxy-tls\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621270 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621284 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-system-cni-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621298 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621310 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-systemd\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621324 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-etc-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621338 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-cni-binary-copy\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621360 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5d5a779a-7592-4da7-950b-3a0679d7a4bf-rootfs\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621374 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-log-socket\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621400 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d5a779a-7592-4da7-950b-3a0679d7a4bf-mcd-auth-proxy-config\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621414 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpzt\" (UniqueName: \"kubernetes.io/projected/5d5a779a-7592-4da7-950b-3a0679d7a4bf-kube-api-access-4wpzt\") pod \"machine-config-daemon-ns45v\" (UID: 
\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621428 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-node-log\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621441 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-bin\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621455 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-cni-multus\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621468 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-hostroot\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621482 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-daemon-config\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621496 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-multus-certs\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621509 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-system-cni-dir\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621524 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-os-release\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621540 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-config\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 
06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621523 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-cni-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621589 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-conf-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621559 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-conf-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621621 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-ovn\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621623 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-etc-kubernetes\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621656 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-slash\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621674 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2l8\" (UniqueName: \"kubernetes.io/projected/5aefe869-3e59-4e2c-a755-4a727d68d3aa-kube-api-access-9r2l8\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621708 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-netns\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621729 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-var-lib-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621734 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621767 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621747 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621809 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-os-release\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621828 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-socket-dir-parent\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621845 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjvt2\" (UniqueName: \"kubernetes.io/projected/75175fa9-a1b3-470e-a782-8edf5b3d464b-kube-api-access-rjvt2\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621860 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cnibin\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621863 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cni-binary-copy\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621875 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-systemd-units\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621905 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-netns\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621923 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-cnibin\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621972 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-cni-bin\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621993 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-netns\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621999 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4chz\" (UniqueName: \"kubernetes.io/projected/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-kube-api-access-p4chz\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622023 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-var-lib-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622056 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-os-release\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622080 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-kubelet\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621896 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-systemd-units\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.621639 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-etc-kubernetes\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 
crc kubenswrapper[4707]: I1213 06:55:16.622098 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-socket-dir-parent\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622132 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-cnibin\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622176 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-cni-bin\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622301 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-log-socket\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622327 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-kubelet\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622340 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-cnibin\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622349 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622376 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-etc-openvswitch\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622378 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-systemd\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622400 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-ovn-kubernetes\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.622935 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-env-overrides\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.623026 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-system-cni-dir\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.623709 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-cni-binary-copy\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.623737 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5d5a779a-7592-4da7-950b-3a0679d7a4bf-rootfs\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.623754 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-hostroot\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.624434 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-var-lib-cni-multus\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.624845 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-config\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.624885 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-multus-certs\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.624903 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-netd\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc 
kubenswrapper[4707]: I1213 06:55:16.624944 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-node-log\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.624985 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-bin\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.625044 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-system-cni-dir\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.625080 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-os-release\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.625364 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d5a779a-7592-4da7-950b-3a0679d7a4bf-mcd-auth-proxy-config\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.625713 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-multus-daemon-config\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.625861 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-script-lib\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.626508 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-host-run-k8s-cni-cncf-io\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.627310 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5d5a779a-7592-4da7-950b-3a0679d7a4bf-proxy-tls\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.627337 4707 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovn-node-metrics-cert\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.628343 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.638825 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.641185 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2l8\" (UniqueName: \"kubernetes.io/projected/5aefe869-3e59-4e2c-a755-4a727d68d3aa-kube-api-access-9r2l8\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.641444 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpzt\" (UniqueName: \"kubernetes.io/projected/5d5a779a-7592-4da7-950b-3a0679d7a4bf-kube-api-access-4wpzt\") pod \"machine-config-daemon-ns45v\" (UID: \"5d5a779a-7592-4da7-950b-3a0679d7a4bf\") " pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.643763 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4chz\" (UniqueName: \"kubernetes.io/projected/c99dbcda-959f-4e7c-80ef-ba6f268ee22d-kube-api-access-p4chz\") pod \"multus-kmn45\" (UID: \"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\") " pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.645471 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjvt2\" (UniqueName: \"kubernetes.io/projected/75175fa9-a1b3-470e-a782-8edf5b3d464b-kube-api-access-rjvt2\") pod \"ovnkube-node-wg6dr\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.647307 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.655445 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.663571 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.670540 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.684411 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.744093 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.744093 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.744149 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.744237 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.744291 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:16 crc kubenswrapper[4707]: E1213 06:55:16.744626 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.812492 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.818718 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kmn45" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.845148 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.887152 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"72d32dbc757780f5691ac322d62873a813efce318c7a366cc5ffdedf57f3d436"} Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.888468 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fb77f1e41bd54d9fee54414129000b4a2c01dd7c8f1f67173235c9cf71bee315"} Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.889403 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jwkdp" event={"ID":"97ae1fcf-604e-4b39-bbb8-7e35b00d2509","Type":"ContainerStarted","Data":"c68bb9faefef7bfbe31b6cc9e7da99d8c00e2ab69a0366196c2584f7a0212bf5"} Dec 13 06:55:16 crc kubenswrapper[4707]: I1213 06:55:16.890538 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fe6c0d6497128614270047deef1a3643d148bffc6d6796541893091282967e71"} Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.228820 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.228938 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.228994 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.229015 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.229038 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:17 crc 
kubenswrapper[4707]: E1213 06:55:17.229169 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229188 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229199 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229243 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:19.229230109 +0000 UTC m=+27.726409146 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229445 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229497 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:19.229484935 +0000 UTC m=+27.726663982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229541 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:55:19.229525836 +0000 UTC m=+27.726704883 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229550 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229558 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229571 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229654 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229666 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:19.229636409 +0000 UTC m=+27.726815496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:17 crc kubenswrapper[4707]: E1213 06:55:17.229687 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:19.22967841 +0000 UTC m=+27.726857457 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.673172 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5aefe869-3e59-4e2c-a755-4a727d68d3aa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rqdnb\" (UID: \"5aefe869-3e59-4e2c-a755-4a727d68d3aa\") " pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:17 crc kubenswrapper[4707]: W1213 06:55:17.707641 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc99dbcda_959f_4e7c_80ef_ba6f268ee22d.slice/crio-6859ff81311665fb6f0e53f770f44f5520e55799e7e13edbffd2fd26625e15b5 WatchSource:0}: Error finding container 6859ff81311665fb6f0e53f770f44f5520e55799e7e13edbffd2fd26625e15b5: Status 404 returned error can't find the container with id 6859ff81311665fb6f0e53f770f44f5520e55799e7e13edbffd2fd26625e15b5 Dec 13 06:55:17 crc kubenswrapper[4707]: W1213 06:55:17.711946 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75175fa9_a1b3_470e_a782_8edf5b3d464b.slice/crio-cba0fc6650df7a6812c775a8b9e925939bbf1f3848e30a1e9358f2bbfde32f11 WatchSource:0}: Error finding container cba0fc6650df7a6812c775a8b9e925939bbf1f3848e30a1e9358f2bbfde32f11: Status 404 returned error can't find the container with id cba0fc6650df7a6812c775a8b9e925939bbf1f3848e30a1e9358f2bbfde32f11 Dec 13 06:55:17 crc kubenswrapper[4707]: W1213 06:55:17.715374 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d5a779a_7592_4da7_950b_3a0679d7a4bf.slice/crio-c8614b35f4bc371c2402114359551eba1fb1982b55fd12da48e41b2914ded5a2 WatchSource:0}: Error finding container c8614b35f4bc371c2402114359551eba1fb1982b55fd12da48e41b2914ded5a2: Status 404 returned error can't find the container with id c8614b35f4bc371c2402114359551eba1fb1982b55fd12da48e41b2914ded5a2 Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.734912 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.894032 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kmn45" event={"ID":"c99dbcda-959f-4e7c-80ef-ba6f268ee22d","Type":"ContainerStarted","Data":"6859ff81311665fb6f0e53f770f44f5520e55799e7e13edbffd2fd26625e15b5"} Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.895007 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"c8614b35f4bc371c2402114359551eba1fb1982b55fd12da48e41b2914ded5a2"} Dec 13 06:55:17 crc kubenswrapper[4707]: I1213 06:55:17.896068 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"cba0fc6650df7a6812c775a8b9e925939bbf1f3848e30a1e9358f2bbfde32f11"} Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.744418 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.744506 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:18 crc kubenswrapper[4707]: E1213 06:55:18.744599 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:18 crc kubenswrapper[4707]: E1213 06:55:18.744697 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.744436 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:18 crc kubenswrapper[4707]: E1213 06:55:18.744825 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.910441 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-chgvn"] Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.910764 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.913308 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.913321 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.914937 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.926335 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.933527 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.945916 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/73c86216-5dce-4edb-a7a7-7f2f569ff92f-serviceca\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.946493 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnq6\" (UniqueName: \"kubernetes.io/projected/73c86216-5dce-4edb-a7a7-7f2f569ff92f-kube-api-access-zjnq6\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.946717 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73c86216-5dce-4edb-a7a7-7f2f569ff92f-host\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.948036 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.959459 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:18 crc kubenswrapper[4707]: I1213 06:55:18.985760 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.000787 4707 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.008031 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.018823 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.027214 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.036172 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.047792 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjnq6\" (UniqueName: \"kubernetes.io/projected/73c86216-5dce-4edb-a7a7-7f2f569ff92f-kube-api-access-zjnq6\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.047841 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73c86216-5dce-4edb-a7a7-7f2f569ff92f-host\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.047868 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/73c86216-5dce-4edb-a7a7-7f2f569ff92f-serviceca\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.047964 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/73c86216-5dce-4edb-a7a7-7f2f569ff92f-host\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.048516 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.059364 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.068080 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.070463 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjnq6\" (UniqueName: \"kubernetes.io/projected/73c86216-5dce-4edb-a7a7-7f2f569ff92f-kube-api-access-zjnq6\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.165851 4707 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.167553 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.167596 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.167608 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.167727 4707 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.173439 4707 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.173688 4707 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.174639 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.174667 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.174676 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.174694 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.174705 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.191720 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.195318 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.195351 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.195359 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.195372 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.195381 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.204267 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.207167 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.207252 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.207263 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.207281 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.207293 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.219029 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.222225 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.222263 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.222276 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.222292 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.222303 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.231702 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.235739 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.235942 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.236062 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
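
Every one of the "Error updating node status, will retry" entries above fails the same way: the Post to https://127.0.0.1:9743/node is refused because nothing is listening yet for the node.network-node-identity.openshift.io webhook, so the API server rejects the kubelet's status patch before it can be applied. A minimal Go sketch of a TCP probe against that endpoint (the address is taken from the log; everything else is illustrative, not OpenShift code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the endpoint the failing webhook call is routed to.
	// "connection refused" here means nothing is listening yet (the
	// network-node-identity pod has not started), not that packets are dropped.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TCP connections")
}
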
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.236147 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.236255 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.247065 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.247553 4707 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
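
"Unable to update node status" marks the point where the kubelet stops retrying within this sync: each attempt hits the refused webhook, and once a fixed budget of attempts is spent the update is abandoned until the next sync period. A sketch of that bounded-retry shape, assuming a budget of 5 (the constant name and count are illustrative assumptions, not the kubelet's actual source):

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the bounded-retry behaviour visible in the log:
// several "will retry" errors followed by "exceeds retry count". The name and
// the value 5 are assumptions for illustration.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := tryPatch()
		if err == nil {
			return nil
		}
		fmt.Println("Error updating node status, will retry:", err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Simulate the failing webhook call: every attempt is refused.
	err := updateNodeStatus(func() error {
		return errors.New(`Post "https://127.0.0.1:9743/node?timeout=10s": dial tcp 127.0.0.1:9743: connect: connection refused`)
	})
	fmt.Println(err)
}
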
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249401 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249436 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249462 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249537 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:55:23.249503296 +0000 UTC m=+31.746682383 (durationBeforeRetry 4s). 
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249551 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249587 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249632 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249657 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249591 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249688 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249700 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249602 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.249714 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249718 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249613 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249636 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:23.249614179 +0000 UTC m=+31.746793266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249664 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249885 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249896 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249919 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:23.249894136 +0000 UTC m=+31.747073243 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.249974 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:23.249936467 +0000 UTC m=+31.747115604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
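
The "not registered" failures above share one cause: until the kubelet re-admits these pods, the secrets and configmaps of their namespaces are not yet registered with its object cache, so every volume setup fails and is rescheduled with a growing durationBeforeRetry. A small sketch of such exponential backoff (base, factor, and cap are assumptions for illustration, not the kubelet's exact tuning):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Each consecutive failure roughly doubles the wait before the next attempt;
	// by the time the entries above are emitted, the per-volume wait has grown to 4s.
	wait := 500 * time.Millisecond
	maxWait := 2 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed -> durationBeforeRetry %v\n", attempt, wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}
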
Dec 13 06:55:19 crc kubenswrapper[4707]: E1213 06:55:19.250011 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:23.249997158 +0000 UTC m=+31.747176285 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.352478 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.352523 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.352534 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.352551 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.352566 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.455245 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.455335 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.455373 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.455404 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.455424 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
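
The NodeNotReady heartbeats keep repeating with the same root cause: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no network configuration yet, and the Ready condition will only flip once the network provider writes one. A simplified Go sketch of such a directory check (the suffix list and path handling are assumptions, not CRI-O's exact logic):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // treat an unreadable directory the same as an empty one
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	if !hasCNIConfig("/etc/kubernetes/cni/net.d") {
		fmt.Println("container runtime network not ready: NetworkReady=false (no CNI configuration file)")
	}
}
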
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.559687 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.559734 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.559745 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.559762 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.559774 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.663095 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.663159 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.663178 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.663201 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.663219 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.765643 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.765688 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.765700 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.765719 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.765733 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.868464 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.868740 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.868851 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.868928 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.869014 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.928141 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/73c86216-5dce-4edb-a7a7-7f2f569ff92f-serviceca\") pod \"node-ca-chgvn\" (UID: \"73c86216-5dce-4edb-a7a7-7f2f569ff92f\") " pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.974693 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.974745 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.974759 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.974775 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:19 crc kubenswrapper[4707]: I1213 06:55:19.974788 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:19Z","lastTransitionTime":"2025-12-13T06:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.079090 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.079137 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.079146 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.079162 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.079172 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.128012 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-chgvn" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.182817 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.182848 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.182857 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.182869 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.182878 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.285736 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.285796 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.285812 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.285835 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.285852 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.389017 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.389068 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.389081 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.389098 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.389110 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.491326 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.491370 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.491381 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.491399 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.491412 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.594118 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.594171 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.594188 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.594212 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.594229 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.697404 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.697497 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.697515 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.697537 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.697554 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.744499 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.744548 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.744497 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:20 crc kubenswrapper[4707]: E1213 06:55:20.744715 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:20 crc kubenswrapper[4707]: E1213 06:55:20.744792 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:20 crc kubenswrapper[4707]: E1213 06:55:20.744869 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.801008 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.801058 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.801072 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.801090 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.801106 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.903422 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.903549 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.903564 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.903583 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.903596 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:20Z","lastTransitionTime":"2025-12-13T06:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:20 crc kubenswrapper[4707]: I1213 06:55:20.905769 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.005885 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.005991 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.006011 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.006035 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.006052 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.108476 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.108521 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.108532 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.108549 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.108576 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: W1213 06:55:21.143301 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aefe869_3e59_4e2c_a755_4a727d68d3aa.slice/crio-eff8d010fba308a4e8013b6bb751d16a8553856771828b9705c4babc79fbcfe8 WatchSource:0}: Error finding container eff8d010fba308a4e8013b6bb751d16a8553856771828b9705c4babc79fbcfe8: Status 404 returned error can't find the container with id eff8d010fba308a4e8013b6bb751d16a8553856771828b9705c4babc79fbcfe8 Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.211045 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.211092 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.211102 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.211118 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.211129 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.314038 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.314085 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.314097 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.314112 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.314122 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.417114 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.417425 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.417439 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.417463 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.417474 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.520308 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.520354 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.520372 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.520388 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.520399 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.620513 4707 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.627810 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.627854 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.627866 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.627881 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.627892 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.729797 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.729836 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.729847 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.729864 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.729877 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.754178 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.764891 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.775469 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.789096 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.798786 4707 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.806605 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.815492 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.823870 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.831611 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.832545 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.832622 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.832641 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.832657 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.832668 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.842466 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.850775 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.858387 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.913270 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerStarted","Data":"eff8d010fba308a4e8013b6bb751d16a8553856771828b9705c4babc79fbcfe8"} Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.935464 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.935521 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.935699 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.935721 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:21 crc kubenswrapper[4707]: I1213 06:55:21.935733 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:21Z","lastTransitionTime":"2025-12-13T06:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.037707 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.037762 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.037778 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.037798 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.037814 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.139840 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.140134 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.140163 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.140181 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.140193 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.241871 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.242117 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.242235 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.242340 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.242443 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.344496 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.344540 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.344556 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.344574 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.344586 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.447409 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.447662 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.447788 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.447896 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.448004 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.456426 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.460849 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.466252 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.510775 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.517644 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.520563 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.529512 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.550671 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.550724 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.550740 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.550760 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.550772 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.556064 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.574557 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.583516 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.595299 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.606138 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.615159 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc 
kubenswrapper[4707]: I1213 06:55:22.619664 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.620094 4707 scope.go:117] "RemoveContainer" containerID="63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.628007 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.640266 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.651371 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.653131 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.653160 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.653169 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.653183 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.653193 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.660646 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.668675 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.674676 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.684972 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.694856 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.702357 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.716256 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.725113 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.735889 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.744213 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.744284 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:22 crc kubenswrapper[4707]: E1213 06:55:22.744350 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:22 crc kubenswrapper[4707]: E1213 06:55:22.744439 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.744238 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:22 crc kubenswrapper[4707]: E1213 06:55:22.744556 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.748552 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.755519 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.755559 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.755571 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.755588 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.755599 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.760464 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.768514 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.778262 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.786102 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:22 crc 
kubenswrapper[4707]: I1213 06:55:22.857291 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.857808 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.857914 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.858015 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.858122 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.916934 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-chgvn" event={"ID":"73c86216-5dce-4edb-a7a7-7f2f569ff92f","Type":"ContainerStarted","Data":"e2eed6b9061bfd51f37c31c26cdbb12eafbcb858a300847ebe450d081bd5f34d"} Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.960020 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.960073 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.960084 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.960106 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:22 crc kubenswrapper[4707]: I1213 06:55:22.960116 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:22Z","lastTransitionTime":"2025-12-13T06:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.064536 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.064582 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.064598 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.064627 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.064642 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.168222 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.168267 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.168277 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.168293 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.168304 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.270673 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.270714 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.270723 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.270738 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.270747 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.296680 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.296783 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.296798 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:55:31.296775573 +0000 UTC m=+39.793954630 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.296833 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.296867 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.296871 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.296933 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:31.296899816 +0000 UTC m=+39.794078863 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297026 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297052 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297062 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297070 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297152 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:31.297133832 +0000 UTC m=+39.794312879 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297202 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297225 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297233 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:31.297212274 +0000 UTC m=+39.794391401 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297236 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:23 crc kubenswrapper[4707]: E1213 06:55:23.297292 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:31.297278525 +0000 UTC m=+39.794457662 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.297331 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.372772 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.372797 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.372805 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.372818 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.372834 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.475001 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.475034 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.475043 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.475056 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.475065 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.577979 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.578014 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.578023 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.578036 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.578046 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.680729 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.680767 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.680780 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.680797 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.680810 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.783133 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.783169 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.783177 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.783190 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.783199 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.885611 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.885651 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.885666 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.885688 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.885703 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.921121 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jwkdp" event={"ID":"97ae1fcf-604e-4b39-bbb8-7e35b00d2509","Type":"ContainerStarted","Data":"b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9"} Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.988341 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.988371 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.988379 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.988392 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:23 crc kubenswrapper[4707]: I1213 06:55:23.988401 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:23Z","lastTransitionTime":"2025-12-13T06:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.091015 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.091048 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.091057 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.091071 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.091081 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.193140 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.193184 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.193197 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.193215 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.193235 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.295821 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.295872 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.295897 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.295918 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.295931 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.399248 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.399311 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.399328 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.399351 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.399368 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.502237 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.502285 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.502297 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.502316 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.502327 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.605276 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.605335 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.605352 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.605372 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.605390 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.708086 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.708175 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.708191 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.708211 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.708227 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.744683 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.744725 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.744785 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:24 crc kubenswrapper[4707]: E1213 06:55:24.744816 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:24 crc kubenswrapper[4707]: E1213 06:55:24.744917 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:24 crc kubenswrapper[4707]: E1213 06:55:24.745015 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.811650 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.811710 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.811723 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.811747 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.811759 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.914966 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.915016 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.915027 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.915045 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:24 crc kubenswrapper[4707]: I1213 06:55:24.915057 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:24Z","lastTransitionTime":"2025-12-13T06:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.016873 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.016909 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.016917 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.016930 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.016938 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.119917 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.120047 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.120064 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.120081 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.120095 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.208809 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.219874 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.226618 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.226657 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.226667 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.226681 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.226694 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.230400 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.233023 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.240315 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.248924 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc 
kubenswrapper[4707]: I1213 06:55:25.270328 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\
":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.289233 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.306156 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.317892 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.329565 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.329619 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.329633 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.329650 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.329662 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.329744 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.346582 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.355002 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.370309 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.380409 4707 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.390795 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.399387 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.410348 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.424160 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.431303 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.431350 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.431362 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.431377 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.431390 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.433815 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.458232 4707 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\
\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b1
7b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.489918 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.504592 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.514256 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.526991 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.533697 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.533740 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.533752 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc 
kubenswrapper[4707]: I1213 06:55:25.533772 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.533783 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.538059 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.546239 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.553896 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.564736 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.574777 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.583609 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.590540 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.636367 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.636417 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.636431 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.636448 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.636461 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.739709 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.739762 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.739779 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.739804 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.739820 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.842036 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.842081 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.842094 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.842114 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.842125 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.944616 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.944666 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.944678 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.944695 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:25 crc kubenswrapper[4707]: I1213 06:55:25.944706 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:25Z","lastTransitionTime":"2025-12-13T06:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.047022 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.047097 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.047116 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.047139 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.047155 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.149664 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.149694 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.149702 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.149715 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.149725 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.252530 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.252739 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.252761 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.252781 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.252801 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.355599 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.355631 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.355640 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.355654 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.355663 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.458448 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.458541 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.458556 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.458576 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.458591 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.561340 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.561401 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.561418 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.561441 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.561460 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.663509 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.663552 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.663562 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.663575 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.663584 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.744650 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.744688 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:26 crc kubenswrapper[4707]: E1213 06:55:26.744792 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.744901 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:26 crc kubenswrapper[4707]: E1213 06:55:26.744934 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:26 crc kubenswrapper[4707]: E1213 06:55:26.745440 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.765709 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.765739 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.765748 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.765761 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.765770 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.868588 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.868638 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.868654 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.868671 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.868685 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.933123 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a860dbd018550c8e8fb4c363e86f4e30b2242def74c904879506ac45661dbe3c"} Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.971077 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.971103 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.971113 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.971126 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:26 crc kubenswrapper[4707]: I1213 06:55:26.971134 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:26Z","lastTransitionTime":"2025-12-13T06:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.073637 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.073665 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.073674 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.073686 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.073696 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.175520 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.175587 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.175600 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.175615 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.175625 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.278445 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.278482 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.278493 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.278507 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.278516 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.381767 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.381810 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.381821 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.381839 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.381850 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.484211 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.484252 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.484264 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.484278 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.484286 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.501503 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm"] Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.502241 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.505149 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.506223 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.515819 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.525931 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.538416 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.556571 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.579241 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee
2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.586621 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.586667 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.586679 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.586697 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.586708 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.590183 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.601075 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.611534 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.620791 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.630345 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.636548 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc 
kubenswrapper[4707]: I1213 06:55:27.638580 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqj7\" (UniqueName: \"kubernetes.io/projected/f4633ffb-aea7-4c92-951e-92309d8bc2e6-kube-api-access-bsqj7\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.638627 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4633ffb-aea7-4c92-951e-92309d8bc2e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.638670 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4633ffb-aea7-4c92-951e-92309d8bc2e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.638726 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4633ffb-aea7-4c92-951e-92309d8bc2e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.646130 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.654320 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.662929 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.670036 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.677992 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.689003 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.689038 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.689050 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.689065 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.689076 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.739622 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4633ffb-aea7-4c92-951e-92309d8bc2e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.739685 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqj7\" (UniqueName: \"kubernetes.io/projected/f4633ffb-aea7-4c92-951e-92309d8bc2e6-kube-api-access-bsqj7\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.739727 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4633ffb-aea7-4c92-951e-92309d8bc2e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.739802 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4633ffb-aea7-4c92-951e-92309d8bc2e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.740374 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f4633ffb-aea7-4c92-951e-92309d8bc2e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.740978 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f4633ffb-aea7-4c92-951e-92309d8bc2e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.745703 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4633ffb-aea7-4c92-951e-92309d8bc2e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.780799 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsqj7\" (UniqueName: \"kubernetes.io/projected/f4633ffb-aea7-4c92-951e-92309d8bc2e6-kube-api-access-bsqj7\") pod \"ovnkube-control-plane-749d76644c-bzfmm\" (UID: \"f4633ffb-aea7-4c92-951e-92309d8bc2e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc 
kubenswrapper[4707]: I1213 06:55:27.790912 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.791141 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.791208 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.791296 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.791368 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.817572 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" Dec 13 06:55:27 crc kubenswrapper[4707]: W1213 06:55:27.831099 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4633ffb_aea7_4c92_951e_92309d8bc2e6.slice/crio-3bfb426306781655371d43cbd25c8b03c3b0a4e763379fb8324a2278f8e0cedf WatchSource:0}: Error finding container 3bfb426306781655371d43cbd25c8b03c3b0a4e763379fb8324a2278f8e0cedf: Status 404 returned error can't find the container with id 3bfb426306781655371d43cbd25c8b03c3b0a4e763379fb8324a2278f8e0cedf Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.893392 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.893427 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.893436 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.893450 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.893459 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.937443 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.938748 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerStarted","Data":"dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.940234 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kmn45" event={"ID":"c99dbcda-959f-4e7c-80ef-ba6f268ee22d","Type":"ContainerStarted","Data":"5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.941977 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b2c4431bf44f99759e88aa95e082d8df4de96312d791f8506cee2a2c1da8cf6b"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.944435 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-chgvn" event={"ID":"73c86216-5dce-4edb-a7a7-7f2f569ff92f","Type":"ContainerStarted","Data":"7c450638963071151c800107d2b11cbb93a48d24148fb0085479270ba9e7aec3"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.945557 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" event={"ID":"f4633ffb-aea7-4c92-951e-92309d8bc2e6","Type":"ContainerStarted","Data":"3bfb426306781655371d43cbd25c8b03c3b0a4e763379fb8324a2278f8e0cedf"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.946883 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"402d5e0ffda9d8c5d558fe1fa6d465a4112cb248ec280588304406305c4fa01f"} Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.996214 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.996272 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.996287 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.996305 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:27 crc kubenswrapper[4707]: I1213 06:55:27.996319 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:27Z","lastTransitionTime":"2025-12-13T06:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.098428 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.098461 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.098471 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.098486 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.098495 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.200488 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.200525 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.200536 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.200551 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.200563 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.302690 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.302744 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.302762 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.302786 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.302800 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.406404 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.406441 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.406451 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.406466 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.406476 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.509160 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.509214 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.509229 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.509245 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.509256 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.612664 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.612728 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.612746 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.612802 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.612824 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.715222 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.715264 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.715276 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.715291 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.715302 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.744857 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.744913 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.744947 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:28 crc kubenswrapper[4707]: E1213 06:55:28.745070 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:28 crc kubenswrapper[4707]: E1213 06:55:28.745162 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:28 crc kubenswrapper[4707]: E1213 06:55:28.745235 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.817414 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.817737 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.817749 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.817765 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.817777 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.921053 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.921095 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.921108 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.921124 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.921135 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:28Z","lastTransitionTime":"2025-12-13T06:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.953493 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"07bab14d70cf4aec6174e412c18e36ec4459ceec36c5b09bffa55fe19dcfc1a2"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.956272 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.958452 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641"} Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.974913 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:28 crc kubenswrapper[4707]: I1213 06:55:28.988317 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.000761 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.009757 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc 
kubenswrapper[4707]: I1213 06:55:29.023573 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.023612 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.023624 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.023640 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.023651 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.023712 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.034941 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.047057 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.056704 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.064733 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.076337 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.088660 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.100496 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.111999 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.125540 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.125591 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.125607 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.125628 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.125645 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.132406 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.145656 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.159743 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.227885 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.227948 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.227992 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.228015 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.228032 4707 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.276982 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xbx52"] Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.277386 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.277456 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.289689 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.302614 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.313325 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.325359 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.330343 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.330378 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.330391 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.330407 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.330419 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.336254 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.354917 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.357553 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz7vg\" (UniqueName: \"kubernetes.io/projected/cbd5a4ca-21f2-414a-a4c4-441959130e09-kube-api-access-xz7vg\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.357582 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.367244 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.379210 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.388879 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.398530 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.408770 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.423232 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.432852 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.432931 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.432963 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.432983 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.432997 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.435813 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.448353 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.459024 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz7vg\" (UniqueName: \"kubernetes.io/projected/cbd5a4ca-21f2-414a-a4c4-441959130e09-kube-api-access-xz7vg\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.459073 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.459293 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.459397 4707 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:29.959378882 +0000 UTC m=+38.456557929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.465192 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.474460 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.474502 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.474512 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.474528 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.474538 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.482221 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz7vg\" (UniqueName: \"kubernetes.io/projected/cbd5a4ca-21f2-414a-a4c4-441959130e09-kube-api-access-xz7vg\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.484279 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metri
cs\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false
,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.488034 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.492841 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.492880 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.492890 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.492906 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.492930 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.494841 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.504752 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.509073 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.509110 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.509120 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.509138 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.509148 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.519595 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.524617 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.524683 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.524701 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.524724 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.524739 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.536818 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.545206 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.545235 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.545245 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.545260 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.545273 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.558022 4707 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"07dfa643-7b33-4289-a4de-483e92c58d5e\\\",\\\"systemUUID\\\":\\\"3c05be00-dcfd-4478-b7d9-3ddef2164ce4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.558191 4707 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.559940 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.559977 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.559989 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.560003 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.560016 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.667152 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.667192 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.667201 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.667216 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.667225 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.769174 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.769220 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.769234 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.769249 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.769259 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.871447 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.871517 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.871529 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.871548 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.871562 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.962133 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258" exitCode=0
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.962186 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258"}
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.963496 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52"
Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.963578 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 13 06:55:29 crc kubenswrapper[4707]: E1213 06:55:29.963626 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:30.963612394 +0000 UTC m=+39.460791441 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.963942 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" event={"ID":"f4633ffb-aea7-4c92-951e-92309d8bc2e6","Type":"ContainerStarted","Data":"975339e48230032bcc46170aac00ea1227491fa864a5dedfaeeb2aa990415d02"}
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.964245 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.973798 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.973837 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.973846 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.973861 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.973871 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:29Z","lastTransitionTime":"2025-12-13T06:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.980463 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.989772 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:29 crc kubenswrapper[4707]: I1213 06:55:29.999361 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.008081 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.016031 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.022974 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.032414 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.042083 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc 
kubenswrapper[4707]: I1213 06:55:30.050509 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.060781 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.069095 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.076068 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.076117 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.076131 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.076149 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.076160 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.078556 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc 
kubenswrapper[4707]: I1213 06:55:30.086134 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.095929 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.106184 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.115254 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.138082 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.146880 
4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.155574 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.161619 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.168044 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.174678 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.178011 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.178065 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.178076 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.178095 4707 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.178108 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.185169 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.193810 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a860dbd018550c8e8fb4c363e86f4e30b2242def74c904879506ac45661dbe3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.200679 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.216378 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.231365 
4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-1
3T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.241447 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.248374 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.257878 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.267318 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.276126 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.280483 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.280521 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.280531 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.280546 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.280558 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.284323 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://402d5e0ffda9d8c5d558fe1fa6d465a4112cb248ec280588304406305c4fa01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.294742 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.383423 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.383487 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.383505 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.383529 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.383546 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.485774 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.485811 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.485822 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.485838 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.485848 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.588120 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.588176 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.588190 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.588210 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.588224 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.691442 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.691504 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.691521 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.691542 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.691558 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.744459 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.744507 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.744545 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:30 crc kubenswrapper[4707]: E1213 06:55:30.744594 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.744463 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:30 crc kubenswrapper[4707]: E1213 06:55:30.744752 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:30 crc kubenswrapper[4707]: E1213 06:55:30.744888 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:30 crc kubenswrapper[4707]: E1213 06:55:30.745002 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.794435 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.794490 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.794502 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.794530 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.794544 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.897275 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.897338 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.897556 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.897589 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.897614 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:30Z","lastTransitionTime":"2025-12-13T06:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.973710 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:30 crc kubenswrapper[4707]: E1213 06:55:30.974057 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:30 crc kubenswrapper[4707]: E1213 06:55:30.974191 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:32.974161468 +0000 UTC m=+41.471340555 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:30 crc kubenswrapper[4707]: I1213 06:55:30.997303 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.000088 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.000123 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.000133 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.000148 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.000160 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.009134 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.020274 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.037063 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.050566 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.067363 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.081263 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://402d5e0ffda9d8c5d558fe1fa6d465a4112cb248ec280588304406305c4fa01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.097147 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.102902 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.103024 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.103052 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.103081 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.103102 4707 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.111639 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.123249 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\
":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.131729 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c450638963071151c800107d2b11cbb93a48d24148fb0085479270ba9e7aec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.141421 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.150310 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.162357 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.176491 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a860dbd018550c8e8fb4c363e86f4e30b2242def74c904879506ac45661dbe3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.186057 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.201449 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.205102 
4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.205139 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.205149 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.205164 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.205175 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.307699 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.307770 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.307787 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.307808 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.307825 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.377644 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.377833 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.377879 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:55:47.377847155 +0000 UTC m=+55.875026212 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.377942 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.378014 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378036 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.378056 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378063 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378084 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378127 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378193 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:47.378170603 +0000 UTC m=+55.875349680 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378240 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378250 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:47.378214174 +0000 UTC m=+55.875393291 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378258 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378277 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378281 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378316 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:47.378306746 +0000 UTC m=+55.875485903 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:31 crc kubenswrapper[4707]: E1213 06:55:31.378419 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:47.378350797 +0000 UTC m=+55.875529904 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.410987 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.411272 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.411292 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.411334 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.411355 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.514662 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.514700 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.514709 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.514724 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.514735 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.617247 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.617367 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.617380 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.617395 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.617406 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.719938 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.720004 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.720020 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.720040 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.720054 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.764923 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.776142 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a860dbd018550c8e8fb4c363e86f4e30b2242def74c904879506ac45661dbe3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.787105 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.805349 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.816689 
4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.822088 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.822135 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.822147 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.822165 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.822177 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.843372 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.866510 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.877474 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://402d5e0ffda9d8c5d558fe1fa6d465a4112cb248ec280588304406305c4fa01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.892301 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.902237 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.913929 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.923887 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.923929 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.923942 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.923976 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.924004 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:31Z","lastTransitionTime":"2025-12-13T06:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.929732 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.942304 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.949418 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c450638963071151c800107d2b11cbb93a48d24148fb0085479270ba9e7aec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.956761 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.963090 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.973124 4707 generic.go:334] "Generic (PLEG): container finished" podID="5aefe869-3e59-4e2c-a755-4a727d68d3aa" containerID="dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d" exitCode=0 Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.973175 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerDied","Data":"dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d"} Dec 13 
06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.975365 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.985908 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:31 crc kubenswrapper[4707]: I1213 06:55:31.995133 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://402d5e0ffda9d8c5d558fe1fa6d465a4112cb248ec280588304406305c4fa01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc 
kubenswrapper[4707]: I1213 06:55:32.004899 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.011866 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.024688 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.026170 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.026198 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.026206 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.026219 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.026228 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.034528 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.042026 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c450638963071151c800107d2b11cbb93a48d24148fb0085479270ba9e7aec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.050415 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.057442 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.066395 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.076524 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus
\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.084993 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a860dbd018550c8e8fb4c363e86f4e30b2242def74c904879506ac45661dbe3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.094028 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d5a779a-7592-4da7-950b-3a0679d7a4bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4wpzt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ns45v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.115889 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75175fa9-a1b3-470e-a782-8edf5b3d464b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjvt2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wg6dr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.127372 
4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bed4c7e-8a92-483b-9aca-47b6cdfba745\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:
54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-13T06:55:12Z\\\",\\\"message\\\":\\\"le observer\\\\nW1213 06:55:10.042482 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1213 06:55:10.042687 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1213 06:55:10.043776 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2914533816/tls.crt::/tmp/serving-cert-2914533816/tls.key\\\\\\\"\\\\nI1213 06:55:10.443185 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1213 06:55:10.444945 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1213 06:55:10.445002 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1213 06:55:10.445026 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1213 06:55:10.445031 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1213 06:55:10.451214 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1213 06:55:10.451287 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1213 06:55:10.451303 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451341 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1213 06:55:10.451351 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1213 06:55:10.451358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1213 06:55:10.451365 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1213 06:55:10.451372 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1213 06:55:10.453554 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.128802 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.128839 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.128850 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.128887 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.128898 4707 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.146682 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ff06f9d-c77a-4cc7-ad1e-b5a35178a101\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f3f9d6bb6e7bdeb04a34109375b5702ed6af2305bc4b992f5d7021b317abdd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14590eac1e9d7f49370c4ba3566a7f02ac45333b9543581c49c36521861a83f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57ed674f1cd4e51364c7fa4eb6565afdbd807407da9029bed4775df183b96de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcacf450402fceadabf0a9330c5b2c2fd91dee2d4347dfe37abf3a410de267a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f955cbe5c2644f576bfa6fa9df930035d039aa0d90d70099d2b2349963f3a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee8a2a21a80770dfe2d46eb535ab1f0967144878596355847018c4e9299a07ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff0dfa147fa7e92a8856eb07a8d940ac5c78e7ac85971dce7390821b3f92b37f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b760c3a7e59d6bbfc1ab9e57b7ec8775ece7c6cf07217e6389586065bee0c43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:54:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.155981 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.232185 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.232221 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.232230 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.232245 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.232256 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.335345 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.335390 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.335403 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.335419 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.335430 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.437925 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.438003 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.438023 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.438045 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.438060 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.540155 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.540201 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.540213 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.540231 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.540243 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.643254 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.643300 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.643312 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.643329 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.643343 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.744574 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.744607 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.744612 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.744675 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:32 crc kubenswrapper[4707]: E1213 06:55:32.744802 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:32 crc kubenswrapper[4707]: E1213 06:55:32.744936 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:32 crc kubenswrapper[4707]: E1213 06:55:32.745051 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:32 crc kubenswrapper[4707]: E1213 06:55:32.745194 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.746029 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.746070 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.746083 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.746096 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.746105 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.848714 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.848761 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.848772 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.848790 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.848802 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.951384 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.951486 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.951508 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.951557 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:32 crc kubenswrapper[4707]: I1213 06:55:32.951577 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:32Z","lastTransitionTime":"2025-12-13T06:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.005883 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:33 crc kubenswrapper[4707]: E1213 06:55:33.006106 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:33 crc kubenswrapper[4707]: E1213 06:55:33.006284 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:37.006256618 +0000 UTC m=+45.503435675 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.053463 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.053530 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.053542 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.053560 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.053570 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:33Z","lastTransitionTime":"2025-12-13T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.156185 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.156223 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.156235 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.156253 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.156269 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:33Z","lastTransitionTime":"2025-12-13T06:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
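The "(durationBeforeRetry 4s)" above is the kubelet's per-volume exponential backoff: each consecutive MountVolume failure doubles the wait before the next attempt. A minimal sketch of that schedule, assuming (not verified from this log) the conventional 500 ms initial delay, doubling factor, and cap; the constants and function name are illustrative, not kubelet's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry returns the assumed backoff delay after n consecutive
// failures: 500ms, 1s, 2s, 4s, ... capped at maxDelay. Hypothetical constants.
func durationBeforeRetry(n int) time.Duration {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2 * time.Minute
	)
	d := initialDelay
	for i := 1; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 5; n++ {
		fmt.Printf("failure %d -> retry in %s\n", n, durationBeforeRetry(n))
	}
	// Under these assumptions, the 4s seen above would correspond to a
	// fourth consecutive mount failure for the metrics-certs secret volume.
}
```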
Dec 13 06:55:33 crc kubenswrapper[4707]: I1213 06:55:33.984275 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"38db1a8bee53e32990dd92d4c651fb3feacce4b4318eb94188afb95fd06f7c35"}
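The "Node became not ready" entries above show the exact condition payload the kubelet writes on the Node object. A minimal sketch that rebuilds that payload with the public k8s.io/api types, just to make the JSON shape explicit; this is only the data structure, not the kubelet's surrounding status-update loop:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Mirrors the condition={...} field logged by setters.go above.
	cond := v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionFalse,
		LastHeartbeatTime:  metav1.Now(),
		LastTransitionTime: metav1.Now(),
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out))
}
```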
Dec 13 06:55:34 crc kubenswrapper[4707]: I1213 06:55:34.744744 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52"
Dec 13 06:55:34 crc kubenswrapper[4707]: I1213 06:55:34.744876 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 13 06:55:34 crc kubenswrapper[4707]: I1213 06:55:34.745012 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 06:55:34 crc kubenswrapper[4707]: E1213 06:55:34.744906 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09"
Dec 13 06:55:34 crc kubenswrapper[4707]: E1213 06:55:34.745159 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 13 06:55:34 crc kubenswrapper[4707]: E1213 06:55:34.745339 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 13 06:55:34 crc kubenswrapper[4707]: I1213 06:55:34.745419 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 13 06:55:34 crc kubenswrapper[4707]: E1213 06:55:34.745598 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 13 06:55:34 crc kubenswrapper[4707]: I1213 06:55:34.988905 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"767ec9824a1b0a79b27ddbfacf8a6b826f923cacae2e04828df966a96703f4eb"}
Dec 13 06:55:34 crc kubenswrapper[4707]: I1213 06:55:34.992286 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"d5ea7cc235b5c536f80fef94835e4b3730c88ccbb99eabb01af7ea1c4ead79b2"}
Dec 13 06:55:35 crc kubenswrapper[4707]: I1213 06:55:35.934777 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 13 06:55:35 crc kubenswrapper[4707]: I1213 06:55:35.934814 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 13 06:55:35 crc kubenswrapper[4707]: I1213 06:55:35.934825 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 13 06:55:35 crc kubenswrapper[4707]: I1213 06:55:35.934843 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 13 06:55:35 crc kubenswrapper[4707]: I1213 06:55:35.934853 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:35Z","lastTransitionTime":"2025-12-13T06:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:35 crc kubenswrapper[4707]: I1213 06:55:35.995673 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" event={"ID":"f4633ffb-aea7-4c92-951e-92309d8bc2e6","Type":"ContainerStarted","Data":"df5ac426262acddad5f43b2d6b7a54fdf3c905fb4e5cc114153c65338fc4b56d"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.000783 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerStarted","Data":"a94d788bad528e7aed3ef9711560528b524d52d274c3a2b45bc5b460f67c95af"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.017814 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aefe869-3e59-4e2c-a755-4a727d68d3aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dec9a0e60e7f6f0ecebce3430404f79b511f38b3608b2a7ed4c4ed704f55554d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-13T06:55:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r2l8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rqdnb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc 
kubenswrapper[4707]: I1213 06:55:36.031671 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e4d8ca7-1728-4889-b700-27684212393f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:54:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7ab5c7f562f8a78956d480ced61ee1ff36535adbf822c827071eaf16e225664\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827505d819824cd94bd1df8a36602336799d3681111b30a1249d29c4de62df54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70dd9071c002e3995bda9569fcfa3c7ea52934e628745695bd8107dd3c39626\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:54:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:54:53Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.037369 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.037414 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.037425 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.037445 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.037458 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.045523 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.058225 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://402d5e0ffda9d8c5d558fe1fa6d465a4112cb248ec280588304406305c4fa01f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.068683 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.078406 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jwkdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97ae1fcf-604e-4b39-bbb8-7e35b00d2509\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b627378edc16df52550b7ee29b05ff370f0da087e511c979c72f677cc5d91ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kb7dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jwkdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.088724 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.100295 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kmn45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c99dbcda-959f-4e7c-80ef-ba6f268ee22d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4chz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kmn45\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.109112 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chgvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73c86216-5dce-4edb-a7a7-7f2f569ff92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c450638963071151c800107d2b11cbb93a48d24148fb0085479270ba9e7aec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-13T06:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zjnq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chgvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.121551 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4633ffb-aea7-4c92-951e-92309d8bc2e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bsqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bzfmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.132380 4707 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xbx52" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbd5a4ca-21f2-414a-a4c4-441959130e09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-13T06:55:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz7vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-13T06:55:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xbx52\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-13T06:55:36Z is after 2025-08-24T17:21:41Z" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.139925 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.139991 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.140008 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.140025 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.140033 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.242519 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.242568 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.242584 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.242604 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.242620 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.281841 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.28182187 podStartE2EDuration="14.28182187s" podCreationTimestamp="2025-12-13 06:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.263214766 +0000 UTC m=+44.760393853" watchObservedRunningTime="2025-12-13 06:55:36.28182187 +0000 UTC m=+44.779000917" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.347025 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.347070 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.347082 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.347099 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.347113 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.348133 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podStartSLOduration=21.348117229 podStartE2EDuration="21.348117229s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.29317666 +0000 UTC m=+44.790355717" watchObservedRunningTime="2025-12-13 06:55:36.348117229 +0000 UTC m=+44.845296276" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.377160 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.377142131 podStartE2EDuration="11.377142131s" podCreationTimestamp="2025-12-13 06:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.376340111 +0000 UTC m=+44.873519168" watchObservedRunningTime="2025-12-13 06:55:36.377142131 +0000 UTC m=+44.874321178" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.399516 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jwkdp" podStartSLOduration=24.399501463 podStartE2EDuration="24.399501463s" podCreationTimestamp="2025-12-13 06:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.399493073 +0000 UTC m=+44.896672120" watchObservedRunningTime="2025-12-13 06:55:36.399501463 +0000 UTC m=+44.896680510" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 
06:55:36.449808 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.449881 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.449894 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.449909 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.449918 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.451778 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=14.451768808 podStartE2EDuration="14.451768808s" podCreationTimestamp="2025-12-13 06:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.451495682 +0000 UTC m=+44.948674729" watchObservedRunningTime="2025-12-13 06:55:36.451768808 +0000 UTC m=+44.948947855" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.522031 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kmn45" podStartSLOduration=21.522011422 podStartE2EDuration="21.522011422s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.521334336 +0000 UTC m=+45.018513383" watchObservedRunningTime="2025-12-13 06:55:36.522011422 +0000 UTC m=+45.019190469" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.543813 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-chgvn" podStartSLOduration=22.543796701 podStartE2EDuration="22.543796701s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:36.532301397 +0000 UTC m=+45.029480444" watchObservedRunningTime="2025-12-13 06:55:36.543796701 +0000 UTC m=+45.040975748" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.551972 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.552013 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.552022 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.552036 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc 
kubenswrapper[4707]: I1213 06:55:36.552046 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.654097 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.654129 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.654137 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.654149 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.654159 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.744279 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.744322 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.744290 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:36 crc kubenswrapper[4707]: E1213 06:55:36.744425 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.744532 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:36 crc kubenswrapper[4707]: E1213 06:55:36.744605 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:36 crc kubenswrapper[4707]: E1213 06:55:36.744686 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:36 crc kubenswrapper[4707]: E1213 06:55:36.744795 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.756073 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.756104 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.756112 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.756127 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.756136 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.857902 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.857946 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.857989 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.858004 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.858015 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.960742 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.960780 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.960791 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.960807 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:36 crc kubenswrapper[4707]: I1213 06:55:36.960818 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:36Z","lastTransitionTime":"2025-12-13T06:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.004759 4707 generic.go:334] "Generic (PLEG): container finished" podID="5aefe869-3e59-4e2c-a755-4a727d68d3aa" containerID="a94d788bad528e7aed3ef9711560528b524d52d274c3a2b45bc5b460f67c95af" exitCode=0 Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.004811 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerDied","Data":"a94d788bad528e7aed3ef9711560528b524d52d274c3a2b45bc5b460f67c95af"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.007296 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"1e88ec0cfadd022acfdd5c85374f6d815597557529db009d9347695612d8a8d9"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.031749 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bzfmm" podStartSLOduration=22.031733125 podStartE2EDuration="22.031733125s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:37.031384736 +0000 UTC m=+45.528563783" watchObservedRunningTime="2025-12-13 06:55:37.031733125 +0000 UTC m=+45.528912172" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.048027 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:37 crc kubenswrapper[4707]: E1213 06:55:37.048139 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:37 crc kubenswrapper[4707]: E1213 06:55:37.048184 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs 
podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:55:45.048170446 +0000 UTC m=+53.545349493 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.062834 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.062859 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.062890 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.062903 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.062912 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.171311 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.171386 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.171416 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.171447 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.171472 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
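The MountVolume.SetUp failure for metrics-certs is not retried immediately: the kubelet's nested pending operations back off exponentially, which is why the entry says no retries are permitted for another 8s (durationBeforeRetry 8s). A schematic of that backoff; the 500ms initial delay, factor of 2, and cap are assumptions based on the kubelet's goroutinemap defaults rather than values taken from this log.

```go
package main

import (
	"fmt"
	"time"
)

// Assumed constants mirroring kubelet's exponential backoff for failed
// volume operations; verify against pkg/util/goroutinemap/exponentialbackoff
// before relying on them.
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	durationBeforeRetryFactor  = 2
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

func main() {
	d := initialDurationBeforeRetry
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: durationBeforeRetry %s\n", attempt, d)
		d *= durationBeforeRetryFactor
		if d > maxDurationBeforeRetry {
			d = maxDurationBeforeRetry
		}
	}
	// By the fifth failure the delay reaches 8s, matching the log's
	// "(durationBeforeRetry 8s)".
}
```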
Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.276004 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.276049 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.276060 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.276076 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.276085 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.378475 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.378506 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.378517 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.378532 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.378542 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.481052 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.481089 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.481099 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.481113 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.481123 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.583853 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.583888 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.583900 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.583914 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.583925 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.686461 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.686510 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.686523 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.686540 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.686551 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.789419 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.789456 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.789465 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.789478 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.789487 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
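Each setters.go:603 entry prints a serialized node condition. A stripped-down struct with matching JSON tags reproduces the condition object seen in the log; this is a reduced sketch of the corev1.NodeCondition shape, not the real type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields kubelet prints for the Ready
// condition; a reduced illustration, not the actual API type.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-12-13T06:55:37Z",
		LastTransitionTime: "2025-12-13T06:55:37Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false",
	}
	out, _ := json.Marshal(c) // error ignored: struct always marshals
	fmt.Println(string(out))  // same shape as condition={...} in the log
}
```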
Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.894521 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.894556 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.894564 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.894579 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.894596 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.996312 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.996351 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.996361 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.996378 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:37 crc kubenswrapper[4707]: I1213 06:55:37.996389 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:37Z","lastTransitionTime":"2025-12-13T06:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.012656 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"230d17f3f2abe60964e8980445b9843d73950af2664a639da41c7afe49cc4d07"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.014920 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerStarted","Data":"7a9248e990f77b9c324cdb81589f7c73e8f7ae0cc481fc742098c4f838f526c6"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.098890 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.098926 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.098935 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.098973 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.098985 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.201670 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.201745 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.201764 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.201782 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.201793 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.304438 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.304482 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.304495 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.304511 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.304523 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.406805 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.406837 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.406846 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.406861 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.406870 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.509615 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.509661 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.509672 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.509690 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.509700 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
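Every kubenswrapper record embeds a klog text header: severity letter, MMDD date, wall-clock time, PID, and source file:line (for example I1213 06:55:38.714319 4707 setters.go:603]). When sifting a journal like this one, splitting that header out makes filtering easier; a small parser assuming the standard klog text format:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches e.g. `I1213 06:55:38.714319 4707 setters.go:603] ...`.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.-]+:\d+)\] (.*)$`)

func main() {
	line := `I1213 06:55:38.714319 4707 setters.go:603] "Node became not ready" node="crc"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```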
Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.611705 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.611747 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.611758 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.611832 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.611847 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.714218 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.714244 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.714264 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.714310 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.714319 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.744261 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:38 crc kubenswrapper[4707]: E1213 06:55:38.744382 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.744261 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:38 crc kubenswrapper[4707]: E1213 06:55:38.744483 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.744281 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.744269 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:38 crc kubenswrapper[4707]: E1213 06:55:38.744626 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:38 crc kubenswrapper[4707]: E1213 06:55:38.744699 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.828088 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.828135 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.828146 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.828173 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.828185 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.929890 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.929931 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.929945 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.929982 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:38 crc kubenswrapper[4707]: I1213 06:55:38.929998 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:38Z","lastTransitionTime":"2025-12-13T06:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.019939 4707 generic.go:334] "Generic (PLEG): container finished" podID="5aefe869-3e59-4e2c-a755-4a727d68d3aa" containerID="7a9248e990f77b9c324cdb81589f7c73e8f7ae0cc481fc742098c4f838f526c6" exitCode=0 Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.020007 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerDied","Data":"7a9248e990f77b9c324cdb81589f7c73e8f7ae0cc481fc742098c4f838f526c6"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.032268 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.032305 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.032317 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.032332 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.032342 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.135571 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.135603 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.135611 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.135625 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.135635 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.238758 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.238809 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.238826 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.238850 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.238863 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.340944 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.341015 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.341034 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.341052 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.341063 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.443290 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.443332 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.443343 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.443360 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.443372 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.546041 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.546075 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.546084 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.546097 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.546118 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.648772 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.648814 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.648824 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.648838 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.648849 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.708045 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.708086 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.708095 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.708111 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.708121 4707 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-13T06:55:39Z","lastTransitionTime":"2025-12-13T06:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.767455 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm"] Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.767851 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.769581 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.769740 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.769806 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.769756 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.875908 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a35039c-3e59-48b8-bef9-42b8ee2439c4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.875997 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a35039c-3e59-48b8-bef9-42b8ee2439c4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.876034 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a35039c-3e59-48b8-bef9-42b8ee2439c4-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.876068 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a35039c-3e59-48b8-bef9-42b8ee2439c4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.876131 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a35039c-3e59-48b8-bef9-42b8ee2439c4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.976766 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a35039c-3e59-48b8-bef9-42b8ee2439c4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.976893 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a35039c-3e59-48b8-bef9-42b8ee2439c4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.976931 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a35039c-3e59-48b8-bef9-42b8ee2439c4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.977014 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a35039c-3e59-48b8-bef9-42b8ee2439c4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.977051 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a35039c-3e59-48b8-bef9-42b8ee2439c4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.977071 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a35039c-3e59-48b8-bef9-42b8ee2439c4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: 
\"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.977034 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a35039c-3e59-48b8-bef9-42b8ee2439c4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.978109 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a35039c-3e59-48b8-bef9-42b8ee2439c4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:39 crc kubenswrapper[4707]: I1213 06:55:39.983518 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a35039c-3e59-48b8-bef9-42b8ee2439c4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.011318 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a35039c-3e59-48b8-bef9-42b8ee2439c4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7tmxm\" (UID: \"4a35039c-3e59-48b8-bef9-42b8ee2439c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.027097 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"a57d7bf1733c9f6cbd1a3a4097fc43695cedac944f8b58559ed648f670ec467e"} Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.027148 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"70fb495b484807ae9dfc81f2ac8873535d393248a9347f981767742b33fa8e87"} Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.029177 4707 generic.go:334] "Generic (PLEG): container finished" podID="5aefe869-3e59-4e2c-a755-4a727d68d3aa" containerID="a3cfeabe4274d6a01ae764bdb24ccb52915c0b6f3bcbd6050e5ff8eed9075188" exitCode=0 Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.029212 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerDied","Data":"a3cfeabe4274d6a01ae764bdb24ccb52915c0b6f3bcbd6050e5ff8eed9075188"} Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.084125 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" Dec 13 06:55:40 crc kubenswrapper[4707]: W1213 06:55:40.095199 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a35039c_3e59_48b8_bef9_42b8ee2439c4.slice/crio-607ae6bc5b06b647bc0851df05eaec06f1e92b81dc7d407fb45683c833bae656 WatchSource:0}: Error finding container 607ae6bc5b06b647bc0851df05eaec06f1e92b81dc7d407fb45683c833bae656: Status 404 returned error can't find the container with id 607ae6bc5b06b647bc0851df05eaec06f1e92b81dc7d407fb45683c833bae656 Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.744353 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.744432 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.744384 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.744370 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:40 crc kubenswrapper[4707]: E1213 06:55:40.744505 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:40 crc kubenswrapper[4707]: E1213 06:55:40.744580 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:40 crc kubenswrapper[4707]: E1213 06:55:40.744650 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:40 crc kubenswrapper[4707]: E1213 06:55:40.744703 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.880024 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 13 06:55:40 crc kubenswrapper[4707]: I1213 06:55:40.894280 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 13 06:55:41 crc kubenswrapper[4707]: I1213 06:55:41.039270 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" event={"ID":"4a35039c-3e59-48b8-bef9-42b8ee2439c4","Type":"ContainerStarted","Data":"607ae6bc5b06b647bc0851df05eaec06f1e92b81dc7d407fb45683c833bae656"} Dec 13 06:55:41 crc kubenswrapper[4707]: I1213 06:55:41.648573 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 06:55:41 crc kubenswrapper[4707]: I1213 06:55:41.662612 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.662588784 podStartE2EDuration="1.662588784s" podCreationTimestamp="2025-12-13 06:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:41.661523448 +0000 UTC m=+50.158702505" watchObservedRunningTime="2025-12-13 06:55:41.662588784 +0000 UTC m=+50.159767881" Dec 13 06:55:42 crc kubenswrapper[4707]: I1213 06:55:42.744753 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:42 crc kubenswrapper[4707]: I1213 06:55:42.744816 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:42 crc kubenswrapper[4707]: I1213 06:55:42.744753 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:42 crc kubenswrapper[4707]: E1213 06:55:42.744917 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:42 crc kubenswrapper[4707]: I1213 06:55:42.744765 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:42 crc kubenswrapper[4707]: E1213 06:55:42.745012 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:42 crc kubenswrapper[4707]: E1213 06:55:42.745047 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:42 crc kubenswrapper[4707]: E1213 06:55:42.745077 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:44 crc kubenswrapper[4707]: I1213 06:55:44.744932 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:44 crc kubenswrapper[4707]: I1213 06:55:44.745089 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:44 crc kubenswrapper[4707]: I1213 06:55:44.745092 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:44 crc kubenswrapper[4707]: E1213 06:55:44.745603 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:44 crc kubenswrapper[4707]: E1213 06:55:44.745396 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:44 crc kubenswrapper[4707]: E1213 06:55:44.745653 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:44 crc kubenswrapper[4707]: I1213 06:55:44.745194 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:44 crc kubenswrapper[4707]: E1213 06:55:44.745739 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:45 crc kubenswrapper[4707]: I1213 06:55:45.056705 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"9ef491ec513059679af28fc2fb363e749b4299a9d2e73359ea9bc7e2ac9b155d"} Dec 13 06:55:45 crc kubenswrapper[4707]: I1213 06:55:45.072440 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:45 crc kubenswrapper[4707]: E1213 06:55:45.072581 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:45 crc kubenswrapper[4707]: E1213 06:55:45.072633 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:01.07261718 +0000 UTC m=+69.569796227 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:55:46 crc kubenswrapper[4707]: I1213 06:55:46.063231 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerStarted","Data":"20d47b69a7a70ce8b58fc0e7d6f42c26eeb22b2a0c06928ba602923627a069a2"} Dec 13 06:55:46 crc kubenswrapper[4707]: I1213 06:55:46.065323 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" event={"ID":"4a35039c-3e59-48b8-bef9-42b8ee2439c4","Type":"ContainerStarted","Data":"59a5490e5d6a6ef2dff00dd17f2f7e656ff511a828d63c2b72f56ef974482402"} Dec 13 06:55:46 crc kubenswrapper[4707]: I1213 06:55:46.744100 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:46 crc kubenswrapper[4707]: I1213 06:55:46.744150 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:46 crc kubenswrapper[4707]: E1213 06:55:46.744261 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:46 crc kubenswrapper[4707]: I1213 06:55:46.744279 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:46 crc kubenswrapper[4707]: I1213 06:55:46.744305 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:46 crc kubenswrapper[4707]: E1213 06:55:46.744463 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:46 crc kubenswrapper[4707]: E1213 06:55:46.744556 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:46 crc kubenswrapper[4707]: E1213 06:55:46.744618 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:47 crc kubenswrapper[4707]: I1213 06:55:47.404548 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:55:47 crc kubenswrapper[4707]: I1213 06:55:47.404670 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404736 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:19.404701625 +0000 UTC m=+87.901880682 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:55:47 crc kubenswrapper[4707]: I1213 06:55:47.404791 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404814 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: I1213 06:55:47.404825 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404833 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404845 4707 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:47 crc kubenswrapper[4707]: I1213 06:55:47.404856 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404888 4707 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404897 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.404873099 +0000 UTC m=+87.902052146 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404983 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.404997 4707 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.405005 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.404993542 +0000 UTC m=+87.902172589 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.405008 4707 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.405047 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.405040523 +0000 UTC m=+87.902219570 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.405050 4707 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:47 crc kubenswrapper[4707]: E1213 06:55:47.405085 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.405078304 +0000 UTC m=+87.902257351 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 13 06:55:48 crc kubenswrapper[4707]: I1213 06:55:48.075369 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"49e3de166d57b13c2482a740c648ea5a0e884f61418cb7d2ef7eb8699aac2db5"} Dec 13 06:55:48 crc kubenswrapper[4707]: I1213 06:55:48.094128 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7tmxm" podStartSLOduration=34.094113109 podStartE2EDuration="34.094113109s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:47.085976172 +0000 UTC m=+55.583155219" watchObservedRunningTime="2025-12-13 06:55:48.094113109 +0000 UTC m=+56.591292156" Dec 13 06:55:48 crc kubenswrapper[4707]: I1213 06:55:48.744006 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:48 crc kubenswrapper[4707]: I1213 06:55:48.744006 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:48 crc kubenswrapper[4707]: I1213 06:55:48.744019 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:48 crc kubenswrapper[4707]: E1213 06:55:48.744122 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:48 crc kubenswrapper[4707]: I1213 06:55:48.744154 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:48 crc kubenswrapper[4707]: E1213 06:55:48.744225 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:48 crc kubenswrapper[4707]: E1213 06:55:48.744332 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:48 crc kubenswrapper[4707]: E1213 06:55:48.744378 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:50 crc kubenswrapper[4707]: I1213 06:55:50.086308 4707 generic.go:334] "Generic (PLEG): container finished" podID="5aefe869-3e59-4e2c-a755-4a727d68d3aa" containerID="20d47b69a7a70ce8b58fc0e7d6f42c26eeb22b2a0c06928ba602923627a069a2" exitCode=0 Dec 13 06:55:50 crc kubenswrapper[4707]: I1213 06:55:50.086357 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerDied","Data":"20d47b69a7a70ce8b58fc0e7d6f42c26eeb22b2a0c06928ba602923627a069a2"} Dec 13 06:55:50 crc kubenswrapper[4707]: I1213 06:55:50.744138 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:50 crc kubenswrapper[4707]: I1213 06:55:50.744170 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:50 crc kubenswrapper[4707]: E1213 06:55:50.744284 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:50 crc kubenswrapper[4707]: I1213 06:55:50.744323 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:50 crc kubenswrapper[4707]: I1213 06:55:50.744339 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:50 crc kubenswrapper[4707]: E1213 06:55:50.744419 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:50 crc kubenswrapper[4707]: E1213 06:55:50.744516 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:50 crc kubenswrapper[4707]: E1213 06:55:50.744571 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:52 crc kubenswrapper[4707]: I1213 06:55:52.744456 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:52 crc kubenswrapper[4707]: E1213 06:55:52.744845 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:52 crc kubenswrapper[4707]: I1213 06:55:52.744977 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:52 crc kubenswrapper[4707]: E1213 06:55:52.745091 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:52 crc kubenswrapper[4707]: I1213 06:55:52.745158 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:52 crc kubenswrapper[4707]: I1213 06:55:52.745190 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:52 crc kubenswrapper[4707]: E1213 06:55:52.745500 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:52 crc kubenswrapper[4707]: E1213 06:55:52.745206 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.103061 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerStarted","Data":"435016b9a8aedf8c26700b1279b7dc38ded11ab6c7d147f5cc133c3a05b27e9e"} Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.103884 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.103996 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.104008 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.114259 4707 generic.go:334] "Generic (PLEG): container finished" podID="5aefe869-3e59-4e2c-a755-4a727d68d3aa" containerID="1f8b8d0d7fbdf8cdb76348e8a3f80156ee08c5ea6630381797cd4d1b7f8d29e6" exitCode=0 Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.114300 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerDied","Data":"1f8b8d0d7fbdf8cdb76348e8a3f80156ee08c5ea6630381797cd4d1b7f8d29e6"} Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.145716 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podStartSLOduration=38.145698312 podStartE2EDuration="38.145698312s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:53.137471126 +0000 UTC m=+61.634650173" watchObservedRunningTime="2025-12-13 06:55:53.145698312 +0000 UTC m=+61.642877359" Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.152243 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:53 crc kubenswrapper[4707]: I1213 06:55:53.152350 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:55:54 crc kubenswrapper[4707]: I1213 06:55:54.744874 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:54 crc kubenswrapper[4707]: E1213 06:55:54.745298 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:54 crc kubenswrapper[4707]: I1213 06:55:54.745002 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:54 crc kubenswrapper[4707]: E1213 06:55:54.745387 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:54 crc kubenswrapper[4707]: I1213 06:55:54.744933 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:54 crc kubenswrapper[4707]: I1213 06:55:54.744994 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:54 crc kubenswrapper[4707]: E1213 06:55:54.745449 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:54 crc kubenswrapper[4707]: E1213 06:55:54.745630 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:55 crc kubenswrapper[4707]: I1213 06:55:55.123054 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" event={"ID":"5aefe869-3e59-4e2c-a755-4a727d68d3aa","Type":"ContainerStarted","Data":"a03a1da470ff0b22e59b747166bb4b6a4fa4e5a3a4facf1fbbae7dd1c54b4436"} Dec 13 06:55:55 crc kubenswrapper[4707]: I1213 06:55:55.151245 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rqdnb" podStartSLOduration=40.151227439 podStartE2EDuration="40.151227439s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:55:55.150599604 +0000 UTC m=+63.647778651" watchObservedRunningTime="2025-12-13 06:55:55.151227439 +0000 UTC m=+63.648406486" Dec 13 06:55:56 crc kubenswrapper[4707]: I1213 06:55:56.245427 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xbx52"] Dec 13 06:55:56 crc kubenswrapper[4707]: I1213 06:55:56.245554 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:56 crc kubenswrapper[4707]: E1213 06:55:56.245632 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:56 crc kubenswrapper[4707]: I1213 06:55:56.744213 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:56 crc kubenswrapper[4707]: E1213 06:55:56.744336 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:56 crc kubenswrapper[4707]: I1213 06:55:56.744561 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:56 crc kubenswrapper[4707]: E1213 06:55:56.744632 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:56 crc kubenswrapper[4707]: I1213 06:55:56.744760 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:56 crc kubenswrapper[4707]: E1213 06:55:56.744829 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:57 crc kubenswrapper[4707]: I1213 06:55:57.746849 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:57 crc kubenswrapper[4707]: E1213 06:55:57.747226 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:55:58 crc kubenswrapper[4707]: I1213 06:55:58.744026 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:55:58 crc kubenswrapper[4707]: E1213 06:55:58.744455 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:55:58 crc kubenswrapper[4707]: I1213 06:55:58.744650 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:55:58 crc kubenswrapper[4707]: E1213 06:55:58.744722 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:55:58 crc kubenswrapper[4707]: I1213 06:55:58.744859 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:55:58 crc kubenswrapper[4707]: E1213 06:55:58.744926 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:55:59 crc kubenswrapper[4707]: I1213 06:55:59.745012 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:55:59 crc kubenswrapper[4707]: E1213 06:55:59.745149 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xbx52" podUID="cbd5a4ca-21f2-414a-a4c4-441959130e09" Dec 13 06:56:00 crc kubenswrapper[4707]: I1213 06:56:00.744327 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:00 crc kubenswrapper[4707]: E1213 06:56:00.744467 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 13 06:56:00 crc kubenswrapper[4707]: I1213 06:56:00.744344 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:56:00 crc kubenswrapper[4707]: E1213 06:56:00.744556 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 13 06:56:00 crc kubenswrapper[4707]: I1213 06:56:00.744457 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:56:00 crc kubenswrapper[4707]: E1213 06:56:00.744631 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.152892 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:56:01 crc kubenswrapper[4707]: E1213 06:56:01.153074 4707 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:56:01 crc kubenswrapper[4707]: E1213 06:56:01.153143 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs podName:cbd5a4ca-21f2-414a-a4c4-441959130e09 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.153124289 +0000 UTC m=+101.650303336 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs") pod "network-metrics-daemon-xbx52" (UID: "cbd5a4ca-21f2-414a-a4c4-441959130e09") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.241913 4707 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.242098 4707 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.277913 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52v2z"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.278383 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.280507 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.281082 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.286815 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.287020 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.287062 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.287142 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.287241 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.287532 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.293550 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.293573 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 06:56:01 crc kubenswrapper[4707]: W1213 06:56:01.297116 4707 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: configmaps "trusted-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 13 06:56:01 crc kubenswrapper[4707]: E1213 06:56:01.297177 4707 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 13 06:56:01 crc kubenswrapper[4707]: W1213 06:56:01.302375 4707 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: secrets "cluster-image-registry-operator-dockercfg-m4qtx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 13 06:56:01 crc kubenswrapper[4707]: E1213 06:56:01.302420 4707 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cluster-image-registry-operator-dockercfg-m4qtx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.302759 4707 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 06:56:01 crc kubenswrapper[4707]: W1213 06:56:01.302795 4707 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: secrets "image-registry-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-image-registry": no relationship found between node 'crc' and this object Dec 13 06:56:01 crc kubenswrapper[4707]: E1213 06:56:01.302833 4707 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"image-registry-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-image-registry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.303170 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.303498 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.303719 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.303880 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.303927 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.304408 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4cmd7"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.304858 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.304883 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.305391 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.305613 4707 util.go:30] "No sandbox for pod can be found. 
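The forbidden list/watch errors above come from the node authorizer: the kubelet identity system:node:crc may read a secret or configmap only after a pod bound to that node references it, hence "no relationship found between node 'crc' and this object"; the errors clear once the newly added pods establish that relationship. One way to reproduce the decision off-node is a SubjectAccessReview for the node identity; a hedged sketch, assuming credentials in the default kubeconfig that are allowed to create access reviews:

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether the node identity from the log may list
	// secrets in openshift-image-registry, the exact access denied above.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:node:crc",
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "openshift-image-registry",
				Verb:      "list",
				Resource:  "secrets",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}
```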
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.307636 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dfxc7"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.309000 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.309073 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cjjxz"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.309176 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.309415 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pwlxr"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.309754 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6c4h6"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.310109 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dfxc7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.310136 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.310180 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.310500 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.311517 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.310842 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.320694 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.321109 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nlwlz"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.321339 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w8kpl"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.321758 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.322212 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.322424 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.324989 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.325432 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.332056 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9hhm7"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.332820 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.337563 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.338017 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.338334 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.338395 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.338704 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.339257 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.339758 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.341928 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d6cfk"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.342465 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.344677 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.345495 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.346390 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.360341 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.361680 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.362038 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.363521 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fp5b9"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.364025 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.368920 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c241862-435c-471a-a61d-31511d807d8e-config\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369038 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46t8r\" (UniqueName: \"kubernetes.io/projected/6c241862-435c-471a-a61d-31511d807d8e-kube-api-access-46t8r\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369068 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbgj\" (UniqueName: \"kubernetes.io/projected/adc3663d-54be-4d1f-a40d-14575ee3f644-kube-api-access-vxbgj\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369096 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369118 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-audit\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369136 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-etcd-serving-ca\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369157 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-encryption-config\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369181 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae296e-7f7a-4903-b425-729fba19cd77-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369205 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba542d52-1a53-45d0-99c7-a56d028d0b42-serving-cert\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369227 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369246 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369267 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c9466a-24c0-47c7-b815-dbe77f75a826-serving-cert\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369288 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-audit-policies\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369308 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369333 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-config\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369359 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/98c17acd-c234-4d92-855e-313cde4274cb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d2hk9\" (UID: \"98c17acd-c234-4d92-855e-313cde4274cb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369382 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c241862-435c-471a-a61d-31511d807d8e-trusted-ca\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369427 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369463 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-config\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369489 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44c42d40-af3b-4fba-b43c-042218dfd0ef-audit-dir\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369583 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369643 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369758 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-etcd-client\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369786 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e437c323-4d06-4be9-be5c-3063c078a1e8-config\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369931 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-serving-cert\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.369974 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-config\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370049 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370123 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-oauth-serving-cert\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370323 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370365 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae296e-7f7a-4903-b425-729fba19cd77-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370447 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxrk\" (UniqueName: \"kubernetes.io/projected/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-kube-api-access-bsxrk\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370497 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c02287e-e126-49cd-94a2-5e0596607990-audit-dir\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370538 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c241862-435c-471a-a61d-31511d807d8e-serving-cert\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370565 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45t6z\" (UniqueName: \"kubernetes.io/projected/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-kube-api-access-45t6z\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370623 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv5pk\" (UniqueName: \"kubernetes.io/projected/44c42d40-af3b-4fba-b43c-042218dfd0ef-kube-api-access-qv5pk\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370657 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z74h\" (UniqueName: \"kubernetes.io/projected/b1585433-7721-4bfc-ae6c-68d963b7f65b-kube-api-access-4z74h\") pod \"dns-operator-744455d44c-cjjxz\" (UID: \"b1585433-7721-4bfc-ae6c-68d963b7f65b\") " pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370780 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-config\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370880 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5lj\" (UniqueName: \"kubernetes.io/projected/d4ae296e-7f7a-4903-b425-729fba19cd77-kube-api-access-5m5lj\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: 
\"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.370968 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbr2\" (UniqueName: \"kubernetes.io/projected/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-kube-api-access-fdbr2\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371031 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371109 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/adc3663d-54be-4d1f-a40d-14575ee3f644-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371246 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371412 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8ce01d-989d-463c-8973-63d7b9fc78da-config\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371467 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-config\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371629 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371721 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371822 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfq5n\" (UniqueName: \"kubernetes.io/projected/12994b98-2a7b-461d-a7bb-9ef25135c20c-kube-api-access-gfq5n\") pod \"downloads-7954f5f757-dfxc7\" (UID: \"12994b98-2a7b-461d-a7bb-9ef25135c20c\") " pod="openshift-console/downloads-7954f5f757-dfxc7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371920 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-client-ca\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.371940 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6f8ce01d-989d-463c-8973-63d7b9fc78da-machine-approver-tls\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.374303 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-service-ca\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.384284 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e437c323-4d06-4be9-be5c-3063c078a1e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.385402 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-trusted-ca-bundle\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.394325 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.394941 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-config\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395041 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/836ef8b4-4ea3-44d4-a07a-724ea1940c40-node-pullsecrets\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395061 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-encryption-config\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395097 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395121 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f948c\" (UniqueName: \"kubernetes.io/projected/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-kube-api-access-f948c\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395153 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395191 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-service-ca-bundle\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395213 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-oauth-config\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395234 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ba542d52-1a53-45d0-99c7-a56d028d0b42-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395262 4707 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-audit-policies\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395276 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-serving-cert\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395319 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/adc3663d-54be-4d1f-a40d-14575ee3f644-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395343 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1585433-7721-4bfc-ae6c-68d963b7f65b-metrics-tls\") pod \"dns-operator-744455d44c-cjjxz\" (UID: \"b1585433-7721-4bfc-ae6c-68d963b7f65b\") " pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395364 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-image-import-ca\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395399 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01406f-154d-4f81-8371-f998b19039cd-serving-cert\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395473 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgfd\" (UniqueName: \"kubernetes.io/projected/6f8ce01d-989d-463c-8973-63d7b9fc78da-kube-api-access-qpgfd\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395516 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395544 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-client-ca\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395579 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssnq6\" (UniqueName: \"kubernetes.io/projected/98c17acd-c234-4d92-855e-313cde4274cb-kube-api-access-ssnq6\") pod \"cluster-samples-operator-665b6dd947-d2hk9\" (UID: \"98c17acd-c234-4d92-855e-313cde4274cb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395601 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8236301f-f078-4d9f-b77e-cc501c121d7f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395652 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-etcd-client\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395692 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/836ef8b4-4ea3-44d4-a07a-724ea1940c40-audit-dir\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395712 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8236301f-f078-4d9f-b77e-cc501c121d7f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395765 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f8ce01d-989d-463c-8973-63d7b9fc78da-auth-proxy-config\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395786 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-serving-cert\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395796 4707 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395816 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62b4\" (UniqueName: \"kubernetes.io/projected/ba542d52-1a53-45d0-99c7-a56d028d0b42-kube-api-access-q62b4\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395839 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnfd\" (UniqueName: \"kubernetes.io/projected/5c02287e-e126-49cd-94a2-5e0596607990-kube-api-access-mpnfd\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395858 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8236301f-f078-4d9f-b77e-cc501c121d7f-config\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395886 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395940 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5596\" (UniqueName: \"kubernetes.io/projected/c0c9466a-24c0-47c7-b815-dbe77f75a826-kube-api-access-g5596\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.395985 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-images\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396002 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396021 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/adc3663d-54be-4d1f-a40d-14575ee3f644-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: 
\"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396037 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396082 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-serving-cert\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396104 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-svb4r"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396276 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396321 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396472 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396651 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396732 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396807 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396107 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e437c323-4d06-4be9-be5c-3063c078a1e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396897 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgr2g\" (UniqueName: \"kubernetes.io/projected/836ef8b4-4ea3-44d4-a07a-724ea1940c40-kube-api-access-wgr2g\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396917 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xl2h\" (UniqueName: \"kubernetes.io/projected/4b01406f-154d-4f81-8371-f998b19039cd-kube-api-access-5xl2h\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.396948 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397120 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397226 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397403 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397433 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397517 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397613 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397636 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397708 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397908 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 
06:56:01.398256 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.398378 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.398489 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.398601 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.398683 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.398753 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.399859 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.400099 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.400651 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.400800 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.400967 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401057 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401188 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401324 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401395 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401417 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401439 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401517 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401573 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 06:56:01 
crc kubenswrapper[4707]: I1213 06:56:01.401592 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401651 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401724 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401815 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401854 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401869 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.401936 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402036 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402064 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402198 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402317 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402413 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402505 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402582 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402664 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402748 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402828 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.402944 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.403069 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.405396 4707 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.405575 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.410731 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.410909 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.397236 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.411071 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-995mf"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.411710 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.412188 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.412505 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.412811 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413076 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413519 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413545 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413638 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413643 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413708 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.413786 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.414225 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417198 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417522 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417644 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417667 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417752 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417870 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.417968 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.418060 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.418131 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.425073 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.425073 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.425791 4707 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.425879 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.426168 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.426305 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.426974 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.427215 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.428034 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.437128 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.451263 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.465973 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.466647 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.467102 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.467374 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.467564 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.470027 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.470497 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglrx"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.470832 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.472453 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.472832 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.475210 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.475324 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.477023 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.477561 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.477873 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.478432 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-d4qvp"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.478937 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.481299 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.485566 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.486246 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.486731 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.486785 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2tzdp"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.487519 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.487753 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.491369 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52v2z"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.491575 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wtlqx"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.492061 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.495048 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.495104 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.495149 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-c8dqr"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.495566 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.497224 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cjjxz"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498513 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-config\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498555 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/836ef8b4-4ea3-44d4-a07a-724ea1940c40-node-pullsecrets\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498582 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-encryption-config\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498610 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-serving-cert\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498635 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498653 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f948c\" (UniqueName: \"kubernetes.io/projected/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-kube-api-access-f948c\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498676 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-config\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498698 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-etcd-service-ca\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498736 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498760 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ba542d52-1a53-45d0-99c7-a56d028d0b42-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498779 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-audit-policies\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498800 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-serving-cert\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498820 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19d61a46-8489-456e-8766-8a3fef9a5f44-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498841 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-service-ca-bundle\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498862 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-oauth-config\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498884 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1585433-7721-4bfc-ae6c-68d963b7f65b-metrics-tls\") pod \"dns-operator-744455d44c-cjjxz\" (UID: \"b1585433-7721-4bfc-ae6c-68d963b7f65b\") " pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498903 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-image-import-ca\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498922 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01406f-154d-4f81-8371-f998b19039cd-serving-cert\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.498943 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-config\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499017 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-etcd-ca\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499041 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/adc3663d-54be-4d1f-a40d-14575ee3f644-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499063 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w2x6\" (UniqueName: \"kubernetes.io/projected/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-kube-api-access-7w2x6\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499086 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpgfd\" (UniqueName: \"kubernetes.io/projected/6f8ce01d-989d-463c-8973-63d7b9fc78da-kube-api-access-qpgfd\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499114 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8236301f-f078-4d9f-b77e-cc501c121d7f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499138 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19d61a46-8489-456e-8766-8a3fef9a5f44-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499160 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499181 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-client-ca\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499208 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssnq6\" (UniqueName: \"kubernetes.io/projected/98c17acd-c234-4d92-855e-313cde4274cb-kube-api-access-ssnq6\") pod \"cluster-samples-operator-665b6dd947-d2hk9\" (UID: \"98c17acd-c234-4d92-855e-313cde4274cb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499229 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/836ef8b4-4ea3-44d4-a07a-724ea1940c40-audit-dir\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499251 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8236301f-f078-4d9f-b77e-cc501c121d7f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499270 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-etcd-client\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499288 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-serving-cert\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499308 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q62b4\" (UniqueName: \"kubernetes.io/projected/ba542d52-1a53-45d0-99c7-a56d028d0b42-kube-api-access-q62b4\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499328 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpnfd\" (UniqueName: \"kubernetes.io/projected/5c02287e-e126-49cd-94a2-5e0596607990-kube-api-access-mpnfd\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499348 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8236301f-f078-4d9f-b77e-cc501c121d7f-config\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499378 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe022427-e3d6-4213-9d27-67555b2bce9f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rnl8c\" (UID: \"fe022427-e3d6-4213-9d27-67555b2bce9f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499407 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f8ce01d-989d-463c-8973-63d7b9fc78da-auth-proxy-config\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499428 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499450 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rj6f\" (UniqueName: \"kubernetes.io/projected/fe022427-e3d6-4213-9d27-67555b2bce9f-kube-api-access-9rj6f\") pod \"control-plane-machine-set-operator-78cbb6b69f-rnl8c\" (UID: \"fe022427-e3d6-4213-9d27-67555b2bce9f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499474 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-images\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499499 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499520 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/adc3663d-54be-4d1f-a40d-14575ee3f644-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499543 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499592 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5596\" (UniqueName: \"kubernetes.io/projected/c0c9466a-24c0-47c7-b815-dbe77f75a826-kube-api-access-g5596\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499625 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-serving-cert\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499643 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e437c323-4d06-4be9-be5c-3063c078a1e8-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499666 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgr2g\" (UniqueName: \"kubernetes.io/projected/836ef8b4-4ea3-44d4-a07a-724ea1940c40-kube-api-access-wgr2g\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499690 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xl2h\" (UniqueName: \"kubernetes.io/projected/4b01406f-154d-4f81-8371-f998b19039cd-kube-api-access-5xl2h\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499707 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46t8r\" (UniqueName: \"kubernetes.io/projected/6c241862-435c-471a-a61d-31511d807d8e-kube-api-access-46t8r\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499724 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbgj\" (UniqueName: \"kubernetes.io/projected/adc3663d-54be-4d1f-a40d-14575ee3f644-kube-api-access-vxbgj\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499741 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499759 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-audit\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499778 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-etcd-serving-ca\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499796 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c241862-435c-471a-a61d-31511d807d8e-config\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " 
pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499813 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cc0d338c-f40c-45a1-9529-0eca9141065d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499830 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-encryption-config\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499848 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cc0d338c-f40c-45a1-9529-0eca9141065d-images\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499865 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fb8205-458e-4e43-a541-9438509a3fc4-config-volume\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499883 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae296e-7f7a-4903-b425-729fba19cd77-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499902 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499919 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499935 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba542d52-1a53-45d0-99c7-a56d028d0b42-serving-cert\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499969 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c9466a-24c0-47c7-b815-dbe77f75a826-serving-cert\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499988 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-audit-policies\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500003 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500019 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-config\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500035 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/98c17acd-c234-4d92-855e-313cde4274cb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d2hk9\" (UID: \"98c17acd-c234-4d92-855e-313cde4274cb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500053 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f748007d-42fd-4882-9ead-51d2c65b5373-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fp5b9\" (UID: \"f748007d-42fd-4882-9ead-51d2c65b5373\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500069 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500085 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c241862-435c-471a-a61d-31511d807d8e-trusted-ca\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: 
I1213 06:56:01.500101 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500117 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-etcd-client\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500132 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-config\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500147 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44c42d40-af3b-4fba-b43c-042218dfd0ef-audit-dir\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500162 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500180 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e437c323-4d06-4be9-be5c-3063c078a1e8-config\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500196 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-serving-cert\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500214 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-config\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500231 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w8kpl\" (UID: 
\"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500246 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b25c52d-5663-457b-ae69-0773516986e7-etcd-client\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500272 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-oauth-serving-cert\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500288 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87k4\" (UniqueName: \"kubernetes.io/projected/cc0d338c-f40c-45a1-9529-0eca9141065d-kube-api-access-f87k4\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500305 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500322 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae296e-7f7a-4903-b425-729fba19cd77-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500339 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b25c52d-5663-457b-ae69-0773516986e7-serving-cert\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500345 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-config\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500372 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxrk\" (UniqueName: \"kubernetes.io/projected/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-kube-api-access-bsxrk\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 
06:56:01.500388 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c02287e-e126-49cd-94a2-5e0596607990-audit-dir\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500405 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c241862-435c-471a-a61d-31511d807d8e-serving-cert\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500422 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45t6z\" (UniqueName: \"kubernetes.io/projected/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-kube-api-access-45t6z\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500438 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv5pk\" (UniqueName: \"kubernetes.io/projected/44c42d40-af3b-4fba-b43c-042218dfd0ef-kube-api-access-qv5pk\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500455 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z74h\" (UniqueName: \"kubernetes.io/projected/b1585433-7721-4bfc-ae6c-68d963b7f65b-kube-api-access-4z74h\") pod \"dns-operator-744455d44c-cjjxz\" (UID: \"b1585433-7721-4bfc-ae6c-68d963b7f65b\") " pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500471 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-config\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500487 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5lj\" (UniqueName: \"kubernetes.io/projected/d4ae296e-7f7a-4903-b425-729fba19cd77-kube-api-access-5m5lj\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500505 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wszf\" (UniqueName: \"kubernetes.io/projected/4b25c52d-5663-457b-ae69-0773516986e7-kube-api-access-9wszf\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500522 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbr2\" (UniqueName: 
\"kubernetes.io/projected/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-kube-api-access-fdbr2\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500538 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500557 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500575 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d61a46-8489-456e-8766-8a3fef9a5f44-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500592 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc0d338c-f40c-45a1-9529-0eca9141065d-proxy-tls\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500608 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fb8205-458e-4e43-a541-9438509a3fc4-secret-volume\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500623 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrp6\" (UniqueName: \"kubernetes.io/projected/e5fb8205-458e-4e43-a541-9438509a3fc4-kube-api-access-2vrp6\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500648 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/adc3663d-54be-4d1f-a40d-14575ee3f644-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500673 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f8ce01d-989d-463c-8973-63d7b9fc78da-config\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500695 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-config\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500717 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500736 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500755 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfq5n\" (UniqueName: \"kubernetes.io/projected/12994b98-2a7b-461d-a7bb-9ef25135c20c-kube-api-access-gfq5n\") pod \"downloads-7954f5f757-dfxc7\" (UID: \"12994b98-2a7b-461d-a7bb-9ef25135c20c\") " pod="openshift-console/downloads-7954f5f757-dfxc7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500775 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-service-ca\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500792 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-client-ca\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500810 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6f8ce01d-989d-463c-8973-63d7b9fc78da-machine-approver-tls\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500828 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wksfc\" (UniqueName: \"kubernetes.io/projected/f748007d-42fd-4882-9ead-51d2c65b5373-kube-api-access-wksfc\") pod \"multus-admission-controller-857f4d67dd-fp5b9\" (UID: 
\"f748007d-42fd-4882-9ead-51d2c65b5373\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500847 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e437c323-4d06-4be9-be5c-3063c078a1e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.500864 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-trusted-ca-bundle\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.501320 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.506110 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-trusted-ca-bundle\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.506790 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-audit\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.507248 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-etcd-serving-ca\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.507843 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c241862-435c-471a-a61d-31511d807d8e-config\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.508574 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/836ef8b4-4ea3-44d4-a07a-724ea1940c40-node-pullsecrets\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.514544 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-encryption-config\") pod 
\"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.516318 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c241862-435c-471a-a61d-31511d807d8e-trusted-ca\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.516846 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-audit-policies\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.517505 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.517635 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/836ef8b4-4ea3-44d4-a07a-724ea1940c40-audit-dir\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.518648 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ae296e-7f7a-4903-b425-729fba19cd77-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.518694 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0c9466a-24c0-47c7-b815-dbe77f75a826-serving-cert\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.519314 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.532779 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c02287e-e126-49cd-94a2-5e0596607990-audit-dir\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.533146 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-images\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.533497 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44c42d40-af3b-4fba-b43c-042218dfd0ef-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.534510 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.538673 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/44c42d40-af3b-4fba-b43c-042218dfd0ef-audit-dir\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.499641 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4cmd7"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539460 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6c4h6"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539478 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539487 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fp5b9"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539495 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539505 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539517 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539526 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-995mf"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539534 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539542 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nlwlz"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539550 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pwlxr"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.539658 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-config\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 
crc kubenswrapper[4707]: I1213 06:56:01.549578 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ba542d52-1a53-45d0-99c7-a56d028d0b42-available-featuregates\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.550621 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.551194 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c241862-435c-471a-a61d-31511d807d8e-serving-cert\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.552993 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e437c323-4d06-4be9-be5c-3063c078a1e8-config\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.553416 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f8ce01d-989d-463c-8973-63d7b9fc78da-auth-proxy-config\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.553552 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.553665 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-serving-cert\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.554105 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.554469 4707 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.555141 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-kws2d"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.557025 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.557365 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-config\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.558714 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba542d52-1a53-45d0-99c7-a56d028d0b42-serving-cert\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.561654 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.565240 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-service-ca\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.568094 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-config\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.570797 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-config\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.573189 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.574286 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f8ce01d-989d-463c-8973-63d7b9fc78da-config\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.575056 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.577769 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-encryption-config\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.578306 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ae296e-7f7a-4903-b425-729fba19cd77-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.578710 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.578910 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-oauth-serving-cert\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.579143 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-etcd-client\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.580225 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-config\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.580826 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.583228 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.583572 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.584634 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-audit-policies\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.585592 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.585626 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.590609 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-service-ca-bundle\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.591058 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c42d40-af3b-4fba-b43c-042218dfd0ef-serving-cert\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.591217 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-client-ca\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.591561 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-etcd-client\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.592303 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/836ef8b4-4ea3-44d4-a07a-724ea1940c40-serving-cert\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 
06:56:01.595001 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.595012 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/836ef8b4-4ea3-44d4-a07a-724ea1940c40-image-import-ca\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.596757 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-client-ca\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.596964 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.599505 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.600361 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1585433-7721-4bfc-ae6c-68d963b7f65b-metrics-tls\") pod \"dns-operator-744455d44c-cjjxz\" (UID: \"b1585433-7721-4bfc-ae6c-68d963b7f65b\") " pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.600836 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.601303 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dfxc7"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.602530 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604655 4707 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9wszf\" (UniqueName: \"kubernetes.io/projected/4b25c52d-5663-457b-ae69-0773516986e7-kube-api-access-9wszf\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604728 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d61a46-8489-456e-8766-8a3fef9a5f44-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604758 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc0d338c-f40c-45a1-9529-0eca9141065d-proxy-tls\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604850 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fb8205-458e-4e43-a541-9438509a3fc4-secret-volume\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604879 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrp6\" (UniqueName: \"kubernetes.io/projected/e5fb8205-458e-4e43-a541-9438509a3fc4-kube-api-access-2vrp6\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604940 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wksfc\" (UniqueName: \"kubernetes.io/projected/f748007d-42fd-4882-9ead-51d2c65b5373-kube-api-access-wksfc\") pod \"multus-admission-controller-857f4d67dd-fp5b9\" (UID: \"f748007d-42fd-4882-9ead-51d2c65b5373\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.604988 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-serving-cert\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605104 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-config\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605160 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-etcd-service-ca\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605179 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c9466a-24c0-47c7-b815-dbe77f75a826-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605210 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19d61a46-8489-456e-8766-8a3fef9a5f44-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605256 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-config\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605291 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-etcd-ca\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605346 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w2x6\" (UniqueName: \"kubernetes.io/projected/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-kube-api-access-7w2x6\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605523 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19d61a46-8489-456e-8766-8a3fef9a5f44-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605618 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe022427-e3d6-4213-9d27-67555b2bce9f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rnl8c\" (UID: \"fe022427-e3d6-4213-9d27-67555b2bce9f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605746 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rj6f\" (UniqueName: \"kubernetes.io/projected/fe022427-e3d6-4213-9d27-67555b2bce9f-kube-api-access-9rj6f\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-rnl8c\" (UID: \"fe022427-e3d6-4213-9d27-67555b2bce9f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605897 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cc0d338c-f40c-45a1-9529-0eca9141065d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.605928 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cc0d338c-f40c-45a1-9529-0eca9141065d-images\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.606053 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fb8205-458e-4e43-a541-9438509a3fc4-config-volume\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.606108 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f748007d-42fd-4882-9ead-51d2c65b5373-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fp5b9\" (UID: \"f748007d-42fd-4882-9ead-51d2c65b5373\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.606277 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b25c52d-5663-457b-ae69-0773516986e7-etcd-client\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.606386 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f87k4\" (UniqueName: \"kubernetes.io/projected/cc0d338c-f40c-45a1-9529-0eca9141065d-kube-api-access-f87k4\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.606560 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b25c52d-5663-457b-ae69-0773516986e7-serving-cert\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.609071 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.609984 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e437c323-4d06-4be9-be5c-3063c078a1e8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.611391 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cc0d338c-f40c-45a1-9529-0eca9141065d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.612942 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j5l5"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.613328 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-oauth-config\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.614113 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-console-serving-cert\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.614328 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.614424 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.614447 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.615819 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.616923 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/98c17acd-c234-4d92-855e-313cde4274cb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-d2hk9\" (UID: \"98c17acd-c234-4d92-855e-313cde4274cb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.618199 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglrx"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.620076 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.621031 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.621538 4707 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01406f-154d-4f81-8371-f998b19039cd-serving-cert\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.622797 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6f8ce01d-989d-463c-8973-63d7b9fc78da-machine-approver-tls\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.623214 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w8kpl"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.624472 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8236301f-f078-4d9f-b77e-cc501c121d7f-config\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.626111 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d6cfk"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.626822 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.628650 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kws2d"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.630494 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-svb4r"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.632153 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9hhm7"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.633912 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.634547 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.637263 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z72vj"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.638278 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.640435 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.642820 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.644194 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.645764 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.647260 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.648162 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.649405 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.650060 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8236301f-f078-4d9f-b77e-cc501c121d7f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.650480 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2tzdp"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.651531 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.653001 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j5l5"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.654118 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z72vj"] Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.660088 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.681180 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.700407 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.703223 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-config\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.720323 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.723230 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-etcd-ca\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.741007 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.744083 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.753409 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b25c52d-5663-457b-ae69-0773516986e7-etcd-client\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.760801 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.770921 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4b25c52d-5663-457b-ae69-0773516986e7-etcd-service-ca\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.781368 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.790310 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b25c52d-5663-457b-ae69-0773516986e7-serving-cert\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.800179 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.820573 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.839835 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.859893 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.879981 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 06:56:01 crc 
kubenswrapper[4707]: I1213 06:56:01.900710 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.920827 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.940721 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.961090 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.981005 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 06:56:01 crc kubenswrapper[4707]: I1213 06:56:01.985630 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f748007d-42fd-4882-9ead-51d2c65b5373-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fp5b9\" (UID: \"f748007d-42fd-4882-9ead-51d2c65b5373\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.000367 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.020867 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.040187 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.048516 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fb8205-458e-4e43-a541-9438509a3fc4-config-volume\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.061007 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.081323 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.094184 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-serving-cert\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.101354 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.111627 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-config\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.120434 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.140985 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.161650 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.172226 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fb8205-458e-4e43-a541-9438509a3fc4-secret-volume\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.180628 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.200282 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.221076 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.225570 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe022427-e3d6-4213-9d27-67555b2bce9f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rnl8c\" (UID: \"fe022427-e3d6-4213-9d27-67555b2bce9f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.241542 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.260698 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.266340 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19d61a46-8489-456e-8766-8a3fef9a5f44-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.280430 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.300826 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 
06:56:02.321160 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.340677 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.360978 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.366139 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19d61a46-8489-456e-8766-8a3fef9a5f44-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.380782 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.421474 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.425724 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cc0d338c-f40c-45a1-9529-0eca9141065d-images\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.439387 4707 request.go:700] Waited for 1.012006279s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.441245 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.460233 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.481371 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.492049 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cc0d338c-f40c-45a1-9529-0eca9141065d-proxy-tls\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.501055 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.543807 4707 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.561413 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 06:56:02 crc kubenswrapper[4707]: E1213 06:56:02.574160 4707 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 13 06:56:02 crc kubenswrapper[4707]: E1213 06:56:02.574496 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adc3663d-54be-4d1f-a40d-14575ee3f644-trusted-ca podName:adc3663d-54be-4d1f-a40d-14575ee3f644 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:03.074472159 +0000 UTC m=+71.571651206 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/adc3663d-54be-4d1f-a40d-14575ee3f644-trusted-ca") pod "cluster-image-registry-operator-dc59b4c8b-l46tr" (UID: "adc3663d-54be-4d1f-a40d-14575ee3f644") : failed to sync configmap cache: timed out waiting for the condition Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.580410 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: E1213 06:56:02.587319 4707 secret.go:188] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 13 06:56:02 crc kubenswrapper[4707]: E1213 06:56:02.587576 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adc3663d-54be-4d1f-a40d-14575ee3f644-image-registry-operator-tls podName:adc3663d-54be-4d1f-a40d-14575ee3f644 nodeName:}" failed. No retries permitted until 2025-12-13 06:56:03.08753978 +0000 UTC m=+71.584718827 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/adc3663d-54be-4d1f-a40d-14575ee3f644-image-registry-operator-tls") pod "cluster-image-registry-operator-dc59b4c8b-l46tr" (UID: "adc3663d-54be-4d1f-a40d-14575ee3f644") : failed to sync secret cache: timed out waiting for the condition Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.600342 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.621011 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.640264 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.666812 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.680688 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.700591 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.720881 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.740058 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.743915 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.743983 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.744030 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.760238 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.781088 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.800378 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.823720 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.841082 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.860933 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.880614 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.900895 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.921124 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.941094 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.960253 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 06:56:02 crc kubenswrapper[4707]: I1213 06:56:02.980790 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.006747 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.021519 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.040222 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.060637 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.081203 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.100714 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.120795 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 06:56:03 crc 
kubenswrapper[4707]: I1213 06:56:03.127592 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/adc3663d-54be-4d1f-a40d-14575ee3f644-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.127882 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/adc3663d-54be-4d1f-a40d-14575ee3f644-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.141282 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.184890 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46t8r\" (UniqueName: \"kubernetes.io/projected/6c241862-435c-471a-a61d-31511d807d8e-kube-api-access-46t8r\") pod \"console-operator-58897d9998-4cmd7\" (UID: \"6c241862-435c-471a-a61d-31511d807d8e\") " pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.201302 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.203066 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbgj\" (UniqueName: \"kubernetes.io/projected/adc3663d-54be-4d1f-a40d-14575ee3f644-kube-api-access-vxbgj\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.220028 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f948c\" (UniqueName: \"kubernetes.io/projected/92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa-kube-api-access-f948c\") pod \"openshift-apiserver-operator-796bbdcf4f-qr5gn\" (UID: \"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.236178 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8236301f-f078-4d9f-b77e-cc501c121d7f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dsdvs\" (UID: \"8236301f-f078-4d9f-b77e-cc501c121d7f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.254547 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxrk\" (UniqueName: \"kubernetes.io/projected/c73c65c7-bef5-4db7-8e02-bbe2b39f1758-kube-api-access-bsxrk\") pod \"console-f9d7485db-6c4h6\" (UID: \"c73c65c7-bef5-4db7-8e02-bbe2b39f1758\") " pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.275529 4707 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/adc3663d-54be-4d1f-a40d-14575ee3f644-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.303400 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5596\" (UniqueName: \"kubernetes.io/projected/c0c9466a-24c0-47c7-b815-dbe77f75a826-kube-api-access-g5596\") pod \"authentication-operator-69f744f599-9hhm7\" (UID: \"c0c9466a-24c0-47c7-b815-dbe77f75a826\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.315753 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e437c323-4d06-4be9-be5c-3063c078a1e8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bg8hn\" (UID: \"e437c323-4d06-4be9-be5c-3063c078a1e8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.339387 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgr2g\" (UniqueName: \"kubernetes.io/projected/836ef8b4-4ea3-44d4-a07a-724ea1940c40-kube-api-access-wgr2g\") pod \"apiserver-76f77b778f-w8kpl\" (UID: \"836ef8b4-4ea3-44d4-a07a-724ea1940c40\") " pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.358237 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xl2h\" (UniqueName: \"kubernetes.io/projected/4b01406f-154d-4f81-8371-f998b19039cd-kube-api-access-5xl2h\") pod \"route-controller-manager-6576b87f9c-5vgr9\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.374685 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q62b4\" (UniqueName: \"kubernetes.io/projected/ba542d52-1a53-45d0-99c7-a56d028d0b42-kube-api-access-q62b4\") pod \"openshift-config-operator-7777fb866f-7ctgj\" (UID: \"ba542d52-1a53-45d0-99c7-a56d028d0b42\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.396461 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpnfd\" (UniqueName: \"kubernetes.io/projected/5c02287e-e126-49cd-94a2-5e0596607990-kube-api-access-mpnfd\") pod \"oauth-openshift-558db77b4-nlwlz\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.397864 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4cmd7"] Dec 13 06:56:03 crc kubenswrapper[4707]: W1213 06:56:03.411147 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c241862_435c_471a_a61d_31511d807d8e.slice/crio-61e724a5df1288d9fee25f247bda4d38d631e71f54135c551b07845b9ad6d021 WatchSource:0}: Error finding container 61e724a5df1288d9fee25f247bda4d38d631e71f54135c551b07845b9ad6d021: Status 404 returned error can't 
find the container with id 61e724a5df1288d9fee25f247bda4d38d631e71f54135c551b07845b9ad6d021 Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.418792 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45t6z\" (UniqueName: \"kubernetes.io/projected/bb92b62b-9ab2-46d8-bd2b-cee2d1133778-kube-api-access-45t6z\") pod \"machine-api-operator-5694c8668f-pwlxr\" (UID: \"bb92b62b-9ab2-46d8-bd2b-cee2d1133778\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.423111 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.434093 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6c4h6" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.437189 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv5pk\" (UniqueName: \"kubernetes.io/projected/44c42d40-af3b-4fba-b43c-042218dfd0ef-kube-api-access-qv5pk\") pod \"apiserver-7bbb656c7d-82vdl\" (UID: \"44c42d40-af3b-4fba-b43c-042218dfd0ef\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.444307 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.455690 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.458661 4707 request.go:700] Waited for 1.900958956s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.459312 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z74h\" (UniqueName: \"kubernetes.io/projected/b1585433-7721-4bfc-ae6c-68d963b7f65b-kube-api-access-4z74h\") pod \"dns-operator-744455d44c-cjjxz\" (UID: \"b1585433-7721-4bfc-ae6c-68d963b7f65b\") " pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.465308 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.470659 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.478595 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.480889 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.486634 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.499930 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.508331 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.511335 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.517115 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5lj\" (UniqueName: \"kubernetes.io/projected/d4ae296e-7f7a-4903-b425-729fba19cd77-kube-api-access-5m5lj\") pod \"openshift-controller-manager-operator-756b6f6bc6-zw7fp\" (UID: \"d4ae296e-7f7a-4903-b425-729fba19cd77\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.540869 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbr2\" (UniqueName: \"kubernetes.io/projected/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-kube-api-access-fdbr2\") pod \"controller-manager-879f6c89f-52v2z\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.552821 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.577070 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.585553 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfq5n\" (UniqueName: \"kubernetes.io/projected/12994b98-2a7b-461d-a7bb-9ef25135c20c-kube-api-access-gfq5n\") pod \"downloads-7954f5f757-dfxc7\" (UID: \"12994b98-2a7b-461d-a7bb-9ef25135c20c\") " pod="openshift-console/downloads-7954f5f757-dfxc7" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.603606 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpgfd\" (UniqueName: \"kubernetes.io/projected/6f8ce01d-989d-463c-8973-63d7b9fc78da-kube-api-access-qpgfd\") pod \"machine-approver-56656f9798-wrg48\" (UID: \"6f8ce01d-989d-463c-8973-63d7b9fc78da\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.612534 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-dfxc7" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.634549 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssnq6\" (UniqueName: \"kubernetes.io/projected/98c17acd-c234-4d92-855e-313cde4274cb-kube-api-access-ssnq6\") pod \"cluster-samples-operator-665b6dd947-d2hk9\" (UID: \"98c17acd-c234-4d92-855e-313cde4274cb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.639377 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wszf\" (UniqueName: \"kubernetes.io/projected/4b25c52d-5663-457b-ae69-0773516986e7-kube-api-access-9wszf\") pod \"etcd-operator-b45778765-d6cfk\" (UID: \"4b25c52d-5663-457b-ae69-0773516986e7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.658672 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19d61a46-8489-456e-8766-8a3fef9a5f44-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-g7h57\" (UID: \"19d61a46-8489-456e-8766-8a3fef9a5f44\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.676208 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrp6\" (UniqueName: \"kubernetes.io/projected/e5fb8205-458e-4e43-a541-9438509a3fc4-kube-api-access-2vrp6\") pod \"collect-profiles-29426805-cpw2x\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.694690 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.702602 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj"] Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.709626 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.716784 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rj6f\" (UniqueName: \"kubernetes.io/projected/fe022427-e3d6-4213-9d27-67555b2bce9f-kube-api-access-9rj6f\") pod \"control-plane-machine-set-operator-78cbb6b69f-rnl8c\" (UID: \"fe022427-e3d6-4213-9d27-67555b2bce9f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.717083 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87k4\" (UniqueName: \"kubernetes.io/projected/cc0d338c-f40c-45a1-9529-0eca9141065d-kube-api-access-f87k4\") pod \"machine-config-operator-74547568cd-fjpzg\" (UID: \"cc0d338c-f40c-45a1-9529-0eca9141065d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.742364 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.742756 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w2x6\" (UniqueName: \"kubernetes.io/projected/b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7-kube-api-access-7w2x6\") pod \"service-ca-operator-777779d784-svb4r\" (UID: \"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.771360 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.775578 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.783447 4707 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.784308 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wksfc\" (UniqueName: \"kubernetes.io/projected/f748007d-42fd-4882-9ead-51d2c65b5373-kube-api-access-wksfc\") pod \"multus-admission-controller-857f4d67dd-fp5b9\" (UID: \"f748007d-42fd-4882-9ead-51d2c65b5373\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.784739 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6c4h6"] Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.811736 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.814629 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.814770 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pwlxr"] Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.820543 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.872445 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.873029 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.872445 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.872497 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.872610 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.874255 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.877424 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w8kpl"] Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.879702 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.879903 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.880271 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.880359 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.901205 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.922991 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.940551 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 06:56:03 crc kubenswrapper[4707]: I1213 06:56:03.953357 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/adc3663d-54be-4d1f-a40d-14575ee3f644-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.001846 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.020970 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.041040 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.060499 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.080223 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.092807 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs"] Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.115866 4707 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.117882 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nlwlz"] Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.119021 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/adc3663d-54be-4d1f-a40d-14575ee3f644-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-l46tr\" (UID: \"adc3663d-54be-4d1f-a40d-14575ee3f644\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.142209 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9hhm7"] Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.149413 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.149853 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.149890 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-tls\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.149984 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-bound-sa-token\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.150050 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-trusted-ca\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.150074 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-profile-collector-cert\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.150850 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-certificates\") 
pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.150984 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mg6\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-kube-api-access-w8mg6\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.151032 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.151058 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.151206 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-srv-cert\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.153480 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:04.653457515 +0000 UTC m=+73.150636632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.167333 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" event={"ID":"6c241862-435c-471a-a61d-31511d807d8e","Type":"ContainerStarted","Data":"61e724a5df1288d9fee25f247bda4d38d631e71f54135c551b07845b9ad6d021"} Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.172504 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6c4h6" event={"ID":"c73c65c7-bef5-4db7-8e02-bbe2b39f1758","Type":"ContainerStarted","Data":"1bfd76f06c6b569e429ef1cf649d2244685fdc4665266bfef6704c71711adb3d"} Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.173313 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" event={"ID":"ba542d52-1a53-45d0-99c7-a56d028d0b42","Type":"ContainerStarted","Data":"4eba82b4759fc976763cacee7a7934741a151df5d3c8e6754b2b9e0b656d9f3c"} Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.174055 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" event={"ID":"836ef8b4-4ea3-44d4-a07a-724ea1940c40","Type":"ContainerStarted","Data":"17b7483ada84eeddadf0ca6caf485c23b9b08365f43cd850fecb68989fd48454"} Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.174881 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" event={"ID":"bb92b62b-9ab2-46d8-bd2b-cee2d1133778","Type":"ContainerStarted","Data":"aa0704e890b73dc15f712ee28bae5aad88a308e91ac99d82e154ac6956c8fac9"} Dec 13 06:56:04 crc kubenswrapper[4707]: W1213 06:56:04.214270 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8236301f_f078_4d9f_b77e_cc501c121d7f.slice/crio-3dfc13626b133de4fe9225e681e82f7b6408118d86025c5832aa6049a1438931 WatchSource:0}: Error finding container 3dfc13626b133de4fe9225e681e82f7b6408118d86025c5832aa6049a1438931: Status 404 returned error can't find the container with id 3dfc13626b133de4fe9225e681e82f7b6408118d86025c5832aa6049a1438931 Dec 13 06:56:04 crc kubenswrapper[4707]: W1213 06:56:04.243187 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0c9466a_24c0_47c7_b815_dbe77f75a826.slice/crio-2055ef73875ba83d0de4d045556a055b061882469c0a417abad66483b2bd8299 WatchSource:0}: Error finding container 2055ef73875ba83d0de4d045556a055b061882469c0a417abad66483b2bd8299: Status 404 returned error can't find the container with id 2055ef73875ba83d0de4d045556a055b061882469c0a417abad66483b2bd8299 Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.251770 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.251929 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-srv-cert\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252254 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcltb\" (UniqueName: \"kubernetes.io/projected/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-kube-api-access-lcltb\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252318 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1864a49-e046-40ca-be97-b4e44acfdfa9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252471 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252543 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-tls\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252560 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bm79\" (UniqueName: \"kubernetes.io/projected/5440a9b6-173c-40ee-a4a3-965427ed9cc3-kube-api-access-4bm79\") pod \"migrator-59844c95c7-ckpvk\" (UID: \"5440a9b6-173c-40ee-a4a3-965427ed9cc3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252575 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-bound-sa-token\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252599 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-trusted-ca\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252616 4707 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-profile-collector-cert\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252643 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnn4x\" (UniqueName: \"kubernetes.io/projected/d1864a49-e046-40ca-be97-b4e44acfdfa9-kube-api-access-jnn4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252669 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-certificates\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252700 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mg6\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-kube-api-access-w8mg6\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252731 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.252834 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1864a49-e046-40ca-be97-b4e44acfdfa9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.259147 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:04.759116402 +0000 UTC m=+73.256295449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.260847 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.261016 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-certificates\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.261135 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.262856 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-trusted-ca\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.265759 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-tls\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.269752 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-srv-cert\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.280167 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-profile-collector-cert\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.301700 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mg6\" (UniqueName: 
\"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-kube-api-access-w8mg6\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.324565 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-bound-sa-token\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.353827 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-tmpfs\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.353881 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1864a49-e046-40ca-be97-b4e44acfdfa9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.353906 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5x6\" (UniqueName: \"kubernetes.io/projected/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-kube-api-access-gc5x6\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.353932 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354020 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/278a952c-12a9-469b-960b-d68b5f2d2f29-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354044 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbfwv\" (UniqueName: \"kubernetes.io/projected/04a68c01-2285-41f7-830f-896a194c343d-kube-api-access-kbfwv\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354079 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfm5\" (UniqueName: 
\"kubernetes.io/projected/6b1fa22e-132c-42d6-9327-f6d826d92373-kube-api-access-8wfm5\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354100 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0-cert\") pod \"ingress-canary-z72vj\" (UID: \"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0\") " pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354134 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-575dn\" (UniqueName: \"kubernetes.io/projected/93186f0d-0a67-42db-b750-a02749ab9f9a-kube-api-access-575dn\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354167 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/910e37ce-87c8-4083-a021-e91516eaa546-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d7vq8\" (UID: \"910e37ce-87c8-4083-a021-e91516eaa546\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354205 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-mountpoint-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354220 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93186f0d-0a67-42db-b750-a02749ab9f9a-metrics-tls\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354237 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bm79\" (UniqueName: \"kubernetes.io/projected/5440a9b6-173c-40ee-a4a3-965427ed9cc3-kube-api-access-4bm79\") pod \"migrator-59844c95c7-ckpvk\" (UID: \"5440a9b6-173c-40ee-a4a3-965427ed9cc3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354282 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/278a952c-12a9-469b-960b-d68b5f2d2f29-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354318 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnn4x\" (UniqueName: \"kubernetes.io/projected/d1864a49-e046-40ca-be97-b4e44acfdfa9-kube-api-access-jnn4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354337 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354353 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/278a952c-12a9-469b-960b-d68b5f2d2f29-ready\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354384 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-csi-data-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354435 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84wn\" (UniqueName: \"kubernetes.io/projected/11795d95-2c15-4a44-9276-6786ec1e5cc1-kube-api-access-h84wn\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354454 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5snj2\" (UniqueName: \"kubernetes.io/projected/910e37ce-87c8-4083-a021-e91516eaa546-kube-api-access-5snj2\") pod \"package-server-manager-789f6589d5-d7vq8\" (UID: \"910e37ce-87c8-4083-a021-e91516eaa546\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354498 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-stats-auth\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354519 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblfl\" (UniqueName: \"kubernetes.io/projected/b383ce4b-acbd-4331-9244-27387ddc224a-kube-api-access-dblfl\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354539 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/93832597-9fae-41f4-8228-f3bf3e2ecd88-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 
06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354572 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93832597-9fae-41f4-8228-f3bf3e2ecd88-metrics-tls\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354593 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msx96\" (UniqueName: \"kubernetes.io/projected/9ae20fe7-4e86-4f78-a23c-1bd355da288d-kube-api-access-msx96\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354624 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9mp\" (UniqueName: \"kubernetes.io/projected/93832597-9fae-41f4-8228-f3bf3e2ecd88-kube-api-access-ps9mp\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354644 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b1fa22e-132c-42d6-9327-f6d826d92373-service-ca-bundle\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354690 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-metrics-certs\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354729 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354854 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhxx\" (UniqueName: \"kubernetes.io/projected/befe42e4-2c02-458c-8465-d0190290e122-kube-api-access-hmhxx\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354876 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-webhook-cert\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354912 
4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-plugins-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354933 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnxj\" (UniqueName: \"kubernetes.io/projected/42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0-kube-api-access-dlnxj\") pod \"ingress-canary-z72vj\" (UID: \"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0\") " pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.354983 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1864a49-e046-40ca-be97-b4e44acfdfa9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355007 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/04a68c01-2285-41f7-830f-896a194c343d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355056 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-socket-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355104 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93832597-9fae-41f4-8228-f3bf3e2ecd88-trusted-ca\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355155 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-registration-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355192 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e34b1f8d-2d33-44a6-830b-9d2c9840117c-signing-cabundle\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355209 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/befe42e4-2c02-458c-8465-d0190290e122-node-bootstrap-token\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355224 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ae20fe7-4e86-4f78-a23c-1bd355da288d-srv-cert\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355242 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ae20fe7-4e86-4f78-a23c-1bd355da288d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355257 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93186f0d-0a67-42db-b750-a02749ab9f9a-config-volume\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355272 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/befe42e4-2c02-458c-8465-d0190290e122-certs\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355288 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04a68c01-2285-41f7-830f-896a194c343d-proxy-tls\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355303 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355333 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcltb\" (UniqueName: \"kubernetes.io/projected/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-kube-api-access-lcltb\") pod \"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355349 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-default-certificate\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355377 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vfk\" (UniqueName: \"kubernetes.io/projected/278a952c-12a9-469b-960b-d68b5f2d2f29-kube-api-access-75vfk\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355401 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e34b1f8d-2d33-44a6-830b-9d2c9840117c-signing-key\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.355421 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnfkd\" (UniqueName: \"kubernetes.io/projected/e34b1f8d-2d33-44a6-830b-9d2c9840117c-kube-api-access-hnfkd\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.356390 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1864a49-e046-40ca-be97-b4e44acfdfa9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.359869 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:04.859854632 +0000 UTC m=+73.357033679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.402011 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1864a49-e046-40ca-be97-b4e44acfdfa9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.426929 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bm79\" (UniqueName: \"kubernetes.io/projected/5440a9b6-173c-40ee-a4a3-965427ed9cc3-kube-api-access-4bm79\") pod \"migrator-59844c95c7-ckpvk\" (UID: \"5440a9b6-173c-40ee-a4a3-965427ed9cc3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.440532 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnn4x\" (UniqueName: \"kubernetes.io/projected/d1864a49-e046-40ca-be97-b4e44acfdfa9-kube-api-access-jnn4x\") pod \"kube-storage-version-migrator-operator-b67b599dd-gfptl\" (UID: \"d1864a49-e046-40ca-be97-b4e44acfdfa9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.455780 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456005 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-mountpoint-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456032 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93186f0d-0a67-42db-b750-a02749ab9f9a-metrics-tls\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456056 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/278a952c-12a9-469b-960b-d68b5f2d2f29-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456073 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456087 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/278a952c-12a9-469b-960b-d68b5f2d2f29-ready\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456104 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-csi-data-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456141 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84wn\" (UniqueName: \"kubernetes.io/projected/11795d95-2c15-4a44-9276-6786ec1e5cc1-kube-api-access-h84wn\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456162 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5snj2\" (UniqueName: \"kubernetes.io/projected/910e37ce-87c8-4083-a021-e91516eaa546-kube-api-access-5snj2\") pod \"package-server-manager-789f6589d5-d7vq8\" (UID: \"910e37ce-87c8-4083-a021-e91516eaa546\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456195 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-stats-auth\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456213 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblfl\" (UniqueName: \"kubernetes.io/projected/b383ce4b-acbd-4331-9244-27387ddc224a-kube-api-access-dblfl\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456233 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/93832597-9fae-41f4-8228-f3bf3e2ecd88-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456251 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msx96\" (UniqueName: \"kubernetes.io/projected/9ae20fe7-4e86-4f78-a23c-1bd355da288d-kube-api-access-msx96\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 
13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456265 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93832597-9fae-41f4-8228-f3bf3e2ecd88-metrics-tls\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456289 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps9mp\" (UniqueName: \"kubernetes.io/projected/93832597-9fae-41f4-8228-f3bf3e2ecd88-kube-api-access-ps9mp\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456315 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b1fa22e-132c-42d6-9327-f6d826d92373-service-ca-bundle\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456343 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-metrics-certs\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456383 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-webhook-cert\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.456823 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:04.956806461 +0000 UTC m=+73.453985508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457619 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhxx\" (UniqueName: \"kubernetes.io/projected/befe42e4-2c02-458c-8465-d0190290e122-kube-api-access-hmhxx\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457650 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-plugins-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457667 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnxj\" (UniqueName: \"kubernetes.io/projected/42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0-kube-api-access-dlnxj\") pod \"ingress-canary-z72vj\" (UID: \"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0\") " pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457694 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/04a68c01-2285-41f7-830f-896a194c343d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457744 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-socket-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457762 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93832597-9fae-41f4-8228-f3bf3e2ecd88-trusted-ca\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457782 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-registration-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457812 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/e34b1f8d-2d33-44a6-830b-9d2c9840117c-signing-cabundle\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457827 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93186f0d-0a67-42db-b750-a02749ab9f9a-config-volume\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457854 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/befe42e4-2c02-458c-8465-d0190290e122-node-bootstrap-token\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457870 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ae20fe7-4e86-4f78-a23c-1bd355da288d-srv-cert\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457887 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ae20fe7-4e86-4f78-a23c-1bd355da288d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457904 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/befe42e4-2c02-458c-8465-d0190290e122-certs\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457966 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04a68c01-2285-41f7-830f-896a194c343d-proxy-tls\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.457989 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458023 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-default-certificate\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 
06:56:04.458048 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vfk\" (UniqueName: \"kubernetes.io/projected/278a952c-12a9-469b-960b-d68b5f2d2f29-kube-api-access-75vfk\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458066 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e34b1f8d-2d33-44a6-830b-9d2c9840117c-signing-key\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458085 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnfkd\" (UniqueName: \"kubernetes.io/projected/e34b1f8d-2d33-44a6-830b-9d2c9840117c-kube-api-access-hnfkd\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458119 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-tmpfs\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458145 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5x6\" (UniqueName: \"kubernetes.io/projected/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-kube-api-access-gc5x6\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458169 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458199 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/278a952c-12a9-469b-960b-d68b5f2d2f29-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458219 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbfwv\" (UniqueName: \"kubernetes.io/projected/04a68c01-2285-41f7-830f-896a194c343d-kube-api-access-kbfwv\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458240 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wfm5\" (UniqueName: 
\"kubernetes.io/projected/6b1fa22e-132c-42d6-9327-f6d826d92373-kube-api-access-8wfm5\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458260 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0-cert\") pod \"ingress-canary-z72vj\" (UID: \"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0\") " pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458310 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-575dn\" (UniqueName: \"kubernetes.io/projected/93186f0d-0a67-42db-b750-a02749ab9f9a-kube-api-access-575dn\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.458344 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/910e37ce-87c8-4083-a021-e91516eaa546-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d7vq8\" (UID: \"910e37ce-87c8-4083-a021-e91516eaa546\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.456881 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-mountpoint-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.459043 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-plugins-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.459705 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-socket-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.459903 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/278a952c-12a9-469b-960b-d68b5f2d2f29-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.459994 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-registration-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.460378 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/04a68c01-2285-41f7-830f-896a194c343d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.461099 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/278a952c-12a9-469b-960b-d68b5f2d2f29-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.462415 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93186f0d-0a67-42db-b750-a02749ab9f9a-config-volume\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.462703 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93832597-9fae-41f4-8228-f3bf3e2ecd88-trusted-ca\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.464324 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/278a952c-12a9-469b-960b-d68b5f2d2f29-ready\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.464484 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b383ce4b-acbd-4331-9244-27387ddc224a-csi-data-dir\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.464981 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e34b1f8d-2d33-44a6-830b-9d2c9840117c-signing-cabundle\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.466811 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b1fa22e-132c-42d6-9327-f6d826d92373-service-ca-bundle\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.474750 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.477633 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-tmpfs\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.488595 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dfxc7"] Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.490662 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-webhook-cert\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.493186 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04a68c01-2285-41f7-830f-896a194c343d-proxy-tls\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.493604 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/befe42e4-2c02-458c-8465-d0190290e122-certs\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.493613 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-default-certificate\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.503991 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e34b1f8d-2d33-44a6-830b-9d2c9840117c-signing-key\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.504163 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5snj2\" (UniqueName: \"kubernetes.io/projected/910e37ce-87c8-4083-a021-e91516eaa546-kube-api-access-5snj2\") pod \"package-server-manager-789f6589d5-d7vq8\" (UID: \"910e37ce-87c8-4083-a021-e91516eaa546\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.505246 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93832597-9fae-41f4-8228-f3bf3e2ecd88-metrics-tls\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.505686 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/befe42e4-2c02-458c-8465-d0190290e122-node-bootstrap-token\") pod \"machine-config-server-c8dqr\" (UID: 
\"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.506828 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-stats-auth\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.530262 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/93186f0d-0a67-42db-b750-a02749ab9f9a-metrics-tls\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.530755 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9ae20fe7-4e86-4f78-a23c-1bd355da288d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.533696 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0-cert\") pod \"ingress-canary-z72vj\" (UID: \"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0\") " pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.539248 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ae20fe7-4e86-4f78-a23c-1bd355da288d-srv-cert\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.540878 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.541584 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1fa22e-132c-42d6-9327-f6d826d92373-metrics-certs\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.541754 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-apiservice-cert\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.540754 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcltb\" (UniqueName: \"kubernetes.io/projected/cb44d620-0ac4-4b6d-a56c-48ef9c0b2548-kube-api-access-lcltb\") pod 
\"catalog-operator-68c6474976-m8dll\" (UID: \"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.555399 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnxj\" (UniqueName: \"kubernetes.io/projected/42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0-kube-api-access-dlnxj\") pod \"ingress-canary-z72vj\" (UID: \"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0\") " pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.559377 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/910e37ce-87c8-4083-a021-e91516eaa546-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-d7vq8\" (UID: \"910e37ce-87c8-4083-a021-e91516eaa546\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.561385 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.562026 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.062008597 +0000 UTC m=+73.559187644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.581380 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl"] Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.583171 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vfk\" (UniqueName: \"kubernetes.io/projected/278a952c-12a9-469b-960b-d68b5f2d2f29-kube-api-access-75vfk\") pod \"cni-sysctl-allowlist-ds-wtlqx\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") " pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.592267 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhxx\" (UniqueName: \"kubernetes.io/projected/befe42e4-2c02-458c-8465-d0190290e122-kube-api-access-hmhxx\") pod \"machine-config-server-c8dqr\" (UID: \"befe42e4-2c02-458c-8465-d0190290e122\") " pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.592536 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.596492 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbfwv\" (UniqueName: \"kubernetes.io/projected/04a68c01-2285-41f7-830f-896a194c343d-kube-api-access-kbfwv\") pod \"machine-config-controller-84d6567774-hvcjw\" (UID: \"04a68c01-2285-41f7-830f-896a194c343d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.599484 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wfm5\" (UniqueName: \"kubernetes.io/projected/6b1fa22e-132c-42d6-9327-f6d826d92373-kube-api-access-8wfm5\") pod \"router-default-5444994796-d4qvp\" (UID: \"6b1fa22e-132c-42d6-9327-f6d826d92373\") " pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.604786 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-c8dqr" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.626538 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-575dn\" (UniqueName: \"kubernetes.io/projected/93186f0d-0a67-42db-b750-a02749ab9f9a-kube-api-access-575dn\") pod \"dns-default-kws2d\" (UID: \"93186f0d-0a67-42db-b750-a02749ab9f9a\") " pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.645648 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z72vj" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.645915 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84wn\" (UniqueName: \"kubernetes.io/projected/11795d95-2c15-4a44-9276-6786ec1e5cc1-kube-api-access-h84wn\") pod \"marketplace-operator-79b997595-mglrx\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.665417 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.665822 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.16580751 +0000 UTC m=+73.662986557 (durationBeforeRetry 500ms). 
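
The repeating failure in this window is self-consistent: the image-registry PVC (pvc-657094db-…) is backed by the kubevirt.io.hostpath-provisioner CSI driver, but the pod that serves that driver (hostpath-provisioner/csi-hostpathplugin-2j5l5) is itself still being started in these same entries, so the kubelet has no registered driver to hand the mount or unmount to and re-queues both operations. Once the plugin registers over the kubelet's plugin-registration socket, the driver name appears on the node's CSINode object. A minimal client-go sketch for inspecting that list — a hypothetical helper, not from this log; the node name "crc" and the $KUBECONFIG path are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from $KUBECONFIG (assumed to point at this cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The kubelet publishes each successfully registered CSI plugin into the
	// CSINode object for its node; "crc" is the node name seen in this log.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range node.Spec.Drivers {
		// kubevirt.io.hostpath-provisioner should appear here once the
		// csi-hostpathplugin pod's registrar has completed registration.
		fmt.Println(d.Name)
	}
}
```
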
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.683059 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps9mp\" (UniqueName: \"kubernetes.io/projected/93832597-9fae-41f4-8228-f3bf3e2ecd88-kube-api-access-ps9mp\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.712051 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/93832597-9fae-41f4-8228-f3bf3e2ecd88-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lbgmc\" (UID: \"93832597-9fae-41f4-8228-f3bf3e2ecd88\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.730481 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.732076 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.756828 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblfl\" (UniqueName: \"kubernetes.io/projected/b383ce4b-acbd-4331-9244-27387ddc224a-kube-api-access-dblfl\") pod \"csi-hostpathplugin-2j5l5\" (UID: \"b383ce4b-acbd-4331-9244-27387ddc224a\") " pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.774687 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.775355 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.275332309 +0000 UTC m=+73.772511366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.780772 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnfkd\" (UniqueName: \"kubernetes.io/projected/e34b1f8d-2d33-44a6-830b-9d2c9840117c-kube-api-access-hnfkd\") pod \"service-ca-9c57cc56f-2tzdp\" (UID: \"e34b1f8d-2d33-44a6-830b-9d2c9840117c\") " pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.783308 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msx96\" (UniqueName: \"kubernetes.io/projected/9ae20fe7-4e86-4f78-a23c-1bd355da288d-kube-api-access-msx96\") pod \"olm-operator-6b444d44fb-lf7l6\" (UID: \"9ae20fe7-4e86-4f78-a23c-1bd355da288d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.798237 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.804988 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5x6\" (UniqueName: \"kubernetes.io/projected/8dc0e19a-9d79-4aa0-a483-f03e6df544f3-kube-api-access-gc5x6\") pod \"packageserver-d55dfcdfc-hpx2l\" (UID: \"8dc0e19a-9d79-4aa0-a483-f03e6df544f3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.815928 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.826490 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.842243 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.842898 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.857640 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.866672 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.874215 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.876702 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.876771 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.376752595 +0000 UTC m=+73.873931642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.877070 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:04 crc kubenswrapper[4707]: E1213 06:56:04.877751 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.377711138 +0000 UTC m=+73.874890185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.883210 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.907914 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.908641 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52v2z"] Dec 13 06:56:04 crc kubenswrapper[4707]: I1213 06:56:04.938646 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.008063 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.008855 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.508837572 +0000 UTC m=+74.006016609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.109123 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.109505 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.60949197 +0000 UTC m=+74.106671017 (durationBeforeRetry 500ms). 
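
Each nestedpendingoperations entry above encodes the same retry contract: the failed mount or unmount may not be retried before now + durationBeforeRetry, with the deadline printed both as wall-clock time and as a monotonic offset (m=+…). A sketch of that gating pattern using the apimachinery wait helpers — the 500ms initial interval matches the log, but the factor and step count here are illustrative assumptions, not the kubelet's actual configuration:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// Assumed parameters for illustration; only the 500ms start mirrors the log.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempts++
		fmt.Printf("attempt %d: driver not yet registered, retrying later\n", attempts)
		return false, nil // not done; wait out the next backoff interval
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("gave up waiting for driver registration")
	}
}
```
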
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.138665 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.195479 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cjjxz"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.212923 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.213540 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.713524888 +0000 UTC m=+74.210703935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.225158 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.246811 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.251444 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.262063 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" event={"ID":"6c241862-435c-471a-a61d-31511d807d8e","Type":"ContainerStarted","Data":"41a6822bba8e34f098abda3c1ce6fee50cbd9f7f4a466a5905cbda6741248255"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.262536 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.270942 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" 
event={"ID":"5c02287e-e126-49cd-94a2-5e0596607990","Type":"ContainerStarted","Data":"fbb0b3763fb3867b169a60f79c9320a442c7ec02650f24cd253d82a36fb51e3d"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.273191 4707 patch_prober.go:28] interesting pod/console-operator-58897d9998-4cmd7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.273233 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" podUID="6c241862-435c-471a-a61d-31511d807d8e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.274934 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" event={"ID":"8236301f-f078-4d9f-b77e-cc501c121d7f","Type":"ContainerStarted","Data":"3dfc13626b133de4fe9225e681e82f7b6408118d86025c5832aa6049a1438931"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.292414 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" event={"ID":"c0c9466a-24c0-47c7-b815-dbe77f75a826","Type":"ContainerStarted","Data":"2055ef73875ba83d0de4d045556a055b061882469c0a417abad66483b2bd8299"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.294538 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfxc7" event={"ID":"12994b98-2a7b-461d-a7bb-9ef25135c20c","Type":"ContainerStarted","Data":"5d935d7e5eec726a44d2acb26bd0ee4eeb19fa1beda4033b38fb821470563618"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.297319 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" event={"ID":"6f8ce01d-989d-463c-8973-63d7b9fc78da","Type":"ContainerStarted","Data":"f9c4db0927366a00fd072842b9a98f7dc46bc5fa2d45a8afc021dd90afde7c7a"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.298419 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" event={"ID":"44c42d40-af3b-4fba-b43c-042218dfd0ef","Type":"ContainerStarted","Data":"eb86b46c8abb099cc5b27ebb38414ba8c05d9b0843d84bfb5d44e0cbb94058b9"} Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.314214 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.314593 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.814578785 +0000 UTC m=+74.311757832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.338756 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.341868 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d6cfk"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.388429 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-svb4r"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.415564 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.415985 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:05.91594102 +0000 UTC m=+74.413120067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.499940 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fp5b9"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.512174 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.514945 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.525150 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.525576 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.025560982 +0000 UTC m=+74.522740029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.525799 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x"] Dec 13 06:56:05 crc kubenswrapper[4707]: W1213 06:56:05.529137 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbefe42e4_2c02_458c_8465_d0190290e122.slice/crio-15f8bf5a046bd43d4b96920a4af7907bccc2a88e317e9f61a3f770fa577007ba WatchSource:0}: Error finding container 15f8bf5a046bd43d4b96920a4af7907bccc2a88e317e9f61a3f770fa577007ba: Status 404 returned error can't find the container with id 15f8bf5a046bd43d4b96920a4af7907bccc2a88e317e9f61a3f770fa577007ba Dec 13 06:56:05 crc kubenswrapper[4707]: W1213 06:56:05.588118 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ae296e_7f7a_4903_b425_729fba19cd77.slice/crio-3a12562a35163e9f2e6b796df4dc41b543363ac387652e2879829b4aa2cd02d7 WatchSource:0}: Error finding container 3a12562a35163e9f2e6b796df4dc41b543363ac387652e2879829b4aa2cd02d7: Status 404 returned error can't find the container with id 3a12562a35163e9f2e6b796df4dc41b543363ac387652e2879829b4aa2cd02d7 Dec 13 06:56:05 crc kubenswrapper[4707]: W1213 06:56:05.594026 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92d25bf9_ab58_4a10_82f8_8e5dd3aa33aa.slice/crio-f7eb8a7809ae53b5a6d94f5932f0b9b89f92088145fd4263eb58aa115e1da0c3 WatchSource:0}: Error finding container f7eb8a7809ae53b5a6d94f5932f0b9b89f92088145fd4263eb58aa115e1da0c3: Status 404 returned error can't find the container with id f7eb8a7809ae53b5a6d94f5932f0b9b89f92088145fd4263eb58aa115e1da0c3 Dec 13 06:56:05 crc kubenswrapper[4707]: W1213 06:56:05.594458 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe022427_e3d6_4213_9d27_67555b2bce9f.slice/crio-9bbfe39bb6e6c074d64f86ce50442d8dc1f18b161d01947eb726c30328da26b3 WatchSource:0}: Error finding container 9bbfe39bb6e6c074d64f86ce50442d8dc1f18b161d01947eb726c30328da26b3: Status 404 returned error can't find the container with id 9bbfe39bb6e6c074d64f86ce50442d8dc1f18b161d01947eb726c30328da26b3 Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.627409 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.629003 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:06.128967145 +0000 UTC m=+74.626146202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.629477 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.650263 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.664168 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.672903 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.729295 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.729712 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.229698805 +0000 UTC m=+74.726877852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.822816 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" podStartSLOduration=51.822791163 podStartE2EDuration="51.822791163s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:05.814450744 +0000 UTC m=+74.311629791" watchObservedRunningTime="2025-12-13 06:56:05.822791163 +0000 UTC m=+74.319970240" Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.833800 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
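
The pod_startup_latency_tracker entry above is directly checkable from its own fields: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time because no image pull was observed). Reproducing the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-12-13 06:55:14 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-12-13 06:56:05.822791163 +0000 UTC")
	// Prints 51.822791163s, the podStartSLOduration reported in the entry above.
	fmt.Println(observed.Sub(created))
}
```
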
No retries permitted until 2025-12-13 06:56:06.333771604 +0000 UTC m=+74.830950661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.834056 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.834262 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.834815 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.334806369 +0000 UTC m=+74.831985406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.898328 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll"] Dec 13 06:56:05 crc kubenswrapper[4707]: I1213 06:56:05.934888 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:05 crc kubenswrapper[4707]: E1213 06:56:05.935324 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.435281102 +0000 UTC m=+74.932460149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.037121 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.037567 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.537549379 +0000 UTC m=+75.034728426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.070240 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z72vj"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.141713 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.142059 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.642044668 +0000 UTC m=+75.139223715 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.242624 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.242939 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.742927231 +0000 UTC m=+75.240106268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.343455 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.343644 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.84358937 +0000 UTC m=+75.340768417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.343940 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.344252 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.844244795 +0000 UTC m=+75.341423842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.344795 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" event={"ID":"b1585433-7721-4bfc-ae6c-68d963b7f65b","Type":"ContainerStarted","Data":"a70b0ed55091ca14f8c9fcf1785b1c97f6ce6e0b275cdf3396f7af6c7864d60d"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.347469 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" event={"ID":"f748007d-42fd-4882-9ead-51d2c65b5373","Type":"ContainerStarted","Data":"e3ce060966d825d3ab6e0ec04af05cada507f2d95c085bcc23372ea0d86928a3"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.371176 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" event={"ID":"8236301f-f078-4d9f-b77e-cc501c121d7f","Type":"ContainerStarted","Data":"8e62871d91a2eed0701eb1f2455c6b9ed404ba90fcdb08cc7713300a3430e16e"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.378452 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" event={"ID":"19d61a46-8489-456e-8766-8a3fef9a5f44","Type":"ContainerStarted","Data":"1c56946d2e8d9fd7ace38d33d1129765a7bbaaf79b2d6435098b52a0c1cca429"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.380231 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" event={"ID":"5440a9b6-173c-40ee-a4a3-965427ed9cc3","Type":"ContainerStarted","Data":"0f366eced9b252014205ed8af9eb75c9d351d2402cac4d311cfe8af8aebabb28"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.386703 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.389978 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" event={"ID":"e5fb8205-458e-4e43-a541-9438509a3fc4","Type":"ContainerStarted","Data":"120a36ef2aa10e9bb5139d2d7cfb93ee282679996b8ef089d9fadf2553cdb060"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.404340 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" event={"ID":"4b25c52d-5663-457b-ae69-0773516986e7","Type":"ContainerStarted","Data":"266525733eb5bd418619be9b42722e55d1dfc7891840b167d28828d488ea81d7"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.405478 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z72vj" event={"ID":"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0","Type":"ContainerStarted","Data":"a10702fdfaed2107273e1910de5e36d4cbb02b72a036ee5d87e93c459f3c675e"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.412061 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" event={"ID":"e437c323-4d06-4be9-be5c-3063c078a1e8","Type":"ContainerStarted","Data":"0a00c613c7cd19ba7f96cacb03fa593eff4e23e2d9b85952335a0534c411e76b"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.429600 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" event={"ID":"c0c9466a-24c0-47c7-b815-dbe77f75a826","Type":"ContainerStarted","Data":"c242f9e98f83e3f53a4e349c8bb5b8708bf4735ec0fabbd32ccb6c516544791b"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.430997 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" event={"ID":"d1864a49-e046-40ca-be97-b4e44acfdfa9","Type":"ContainerStarted","Data":"6ebf254c6c64f23ea92fe69ada284dbafe38d138b27b36f4b1bf4fbefd771ca6"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.431774 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" event={"ID":"d4ae296e-7f7a-4903-b425-729fba19cd77","Type":"ContainerStarted","Data":"3a12562a35163e9f2e6b796df4dc41b543363ac387652e2879829b4aa2cd02d7"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.435536 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c8dqr" event={"ID":"befe42e4-2c02-458c-8465-d0190290e122","Type":"ContainerStarted","Data":"15f8bf5a046bd43d4b96920a4af7907bccc2a88e317e9f61a3f770fa577007ba"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.438981 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" event={"ID":"fe022427-e3d6-4213-9d27-67555b2bce9f","Type":"ContainerStarted","Data":"9bbfe39bb6e6c074d64f86ce50442d8dc1f18b161d01947eb726c30328da26b3"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.443799 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" 
event={"ID":"5c02287e-e126-49cd-94a2-5e0596607990","Type":"ContainerStarted","Data":"1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.445107 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" event={"ID":"bb92b62b-9ab2-46d8-bd2b-cee2d1133778","Type":"ContainerStarted","Data":"e22ac4e6804f3b61feb1cbb48b7644e6cb7616ea256961ec18a35436ed314c04"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.445753 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" event={"ID":"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa","Type":"ContainerStarted","Data":"f7eb8a7809ae53b5a6d94f5932f0b9b89f92088145fd4263eb58aa115e1da0c3"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.446983 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" event={"ID":"cc0d338c-f40c-45a1-9529-0eca9141065d","Type":"ContainerStarted","Data":"c42df11017871eba5b5e838c6a06f5bf6f45526a8b06dc87bc5eee3eca133a90"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.451881 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.452180 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.952030393 +0000 UTC m=+75.449209440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.452327 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.452880 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:06.952869773 +0000 UTC m=+75.450048820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.461361 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" event={"ID":"ba542d52-1a53-45d0-99c7-a56d028d0b42","Type":"ContainerStarted","Data":"b016c806024a79081fed9bdc66c356bef9fae597efe4a4d5835fc47028f501db"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.477826 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6c4h6" event={"ID":"c73c65c7-bef5-4db7-8e02-bbe2b39f1758","Type":"ContainerStarted","Data":"5a176a31565db48574161fe88fac0e16d7c61624ec0755be8fc3a1856ca06087"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.479985 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglrx"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.480440 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" event={"ID":"4b01406f-154d-4f81-8371-f998b19039cd","Type":"ContainerStarted","Data":"43dcfe3f898badb157c2ffe6e269c2682efbcf575e31e30472f2ec8201fb5b2a"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.481907 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" event={"ID":"adc3663d-54be-4d1f-a40d-14575ee3f644","Type":"ContainerStarted","Data":"751d02ce6be7765438636cc31fcb1a086b3e027f879217effba7cd3af1a0ef71"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.482721 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-d4qvp" event={"ID":"6b1fa22e-132c-42d6-9327-f6d826d92373","Type":"ContainerStarted","Data":"a5110d2a495f05330e4990965f2e021a2db23af75a9da8ba42b567c0198ac836"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.483508 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" event={"ID":"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7","Type":"ContainerStarted","Data":"563dee971a37edd934033a7150375133b9eaadaa9850bae61e4841070953f687"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.490896 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" event={"ID":"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548","Type":"ContainerStarted","Data":"b26713bdfed1e6e20a423a72bf20b4011b65b8b45b6498978c80e1cf3d22cd56"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.498874 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" event={"ID":"278a952c-12a9-469b-960b-d68b5f2d2f29","Type":"ContainerStarted","Data":"e3a0f0ac402076cba334dc1ac9b94b3346b66c066274e6d3195ad7160d8b1f23"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.500210 4707 generic.go:334] "Generic (PLEG): container finished" podID="836ef8b4-4ea3-44d4-a07a-724ea1940c40" 
containerID="96afa87ce1c65b89a85415f62251b4e10534e8f44c0074b4cd9ed69b4940598b" exitCode=0 Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.500278 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" event={"ID":"836ef8b4-4ea3-44d4-a07a-724ea1940c40","Type":"ContainerDied","Data":"96afa87ce1c65b89a85415f62251b4e10534e8f44c0074b4cd9ed69b4940598b"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.501305 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" event={"ID":"45efad7b-cca0-4262-8c64-74d2cbbbfe5b","Type":"ContainerStarted","Data":"7e644882bf2368c51fdaf9eb9d5dde22f36ef40ee1230c097d2c04cec7495435"} Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.502004 4707 patch_prober.go:28] interesting pod/console-operator-58897d9998-4cmd7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.502045 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" podUID="6c241862-435c-471a-a61d-31511d807d8e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.552982 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.555367 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.055345634 +0000 UTC m=+75.552524691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.589728 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2tzdp"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.614657 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-kws2d"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.655388 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.655710 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.155698665 +0000 UTC m=+75.652877712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.662270 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.756059 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.756315 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.256295651 +0000 UTC m=+75.753474698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.756424 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.756722 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.256714221 +0000 UTC m=+75.753893268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.856889 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.857111 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.357079212 +0000 UTC m=+75.854258269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.857622 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.857883 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.357868501 +0000 UTC m=+75.855047618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.891081 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6c4h6" podStartSLOduration=51.891039441 podStartE2EDuration="51.891039441s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:06.889902734 +0000 UTC m=+75.387081791" watchObservedRunningTime="2025-12-13 06:56:06.891039441 +0000 UTC m=+75.388218498" Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.961357 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:06 crc kubenswrapper[4707]: E1213 06:56:06.961779 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.461764116 +0000 UTC m=+75.958943163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.964298 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.982078 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2j5l5"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.984318 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc"] Dec 13 06:56:06 crc kubenswrapper[4707]: I1213 06:56:06.991807 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw"] Dec 13 06:56:07 crc kubenswrapper[4707]: W1213 06:56:07.006271 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae20fe7_4e86_4f78_a23c_1bd355da288d.slice/crio-e45264837542cac33c916d600540bc07dcbc49eac43d762d982f6ddd268e0072 WatchSource:0}: Error finding container e45264837542cac33c916d600540bc07dcbc49eac43d762d982f6ddd268e0072: Status 404 returned error can't find the container with id e45264837542cac33c916d600540bc07dcbc49eac43d762d982f6ddd268e0072 Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.062643 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.063275 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.563262174 +0000 UTC m=+76.060441221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.163974 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.164342 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.664327512 +0000 UTC m=+76.161506559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.265503 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.265861 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.76584606 +0000 UTC m=+76.263025107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.366326 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.366497 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.866476648 +0000 UTC m=+76.363655695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.366682 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.366989 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.86697673 +0000 UTC m=+76.364155777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.467297 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.467536 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.967509215 +0000 UTC m=+76.464688262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.467903 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.468293 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:07.968278343 +0000 UTC m=+76.465457390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.527735 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" event={"ID":"910e37ce-87c8-4083-a021-e91516eaa546","Type":"ContainerStarted","Data":"9faa0b40373153e6532006a6e45ab8efe70431a17414438dcac842d9f357b7a2"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.532837 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" event={"ID":"04a68c01-2285-41f7-830f-896a194c343d","Type":"ContainerStarted","Data":"f7d2e51d751cfb33e432c55f4dae318b282f13f7c572cf40a9d72bef1dc4df52"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.533736 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" event={"ID":"11795d95-2c15-4a44-9276-6786ec1e5cc1","Type":"ContainerStarted","Data":"0dca281a318168efe7fe774b77ca2f4a8bb60f9777a87c269e8c9f80c809aafb"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.535655 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" event={"ID":"b383ce4b-acbd-4331-9244-27387ddc224a","Type":"ContainerStarted","Data":"36068b513bbc727aa744ddede31dd051db6c560dbd965d10b56a614863699ff6"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.536786 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" event={"ID":"93832597-9fae-41f4-8228-f3bf3e2ecd88","Type":"ContainerStarted","Data":"a854d45763ba4cae2b917bd09be9e543a8117c9c37f9e9b7018ece6a515e5096"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.538465 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" event={"ID":"9ae20fe7-4e86-4f78-a23c-1bd355da288d","Type":"ContainerStarted","Data":"e45264837542cac33c916d600540bc07dcbc49eac43d762d982f6ddd268e0072"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.541789 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kws2d" event={"ID":"93186f0d-0a67-42db-b750-a02749ab9f9a","Type":"ContainerStarted","Data":"d0823602e06af0897e2cc5887b85f8cfb1cc727bb0e00e539bcafbbb780ddc18"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.551453 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dfxc7" event={"ID":"12994b98-2a7b-461d-a7bb-9ef25135c20c","Type":"ContainerStarted","Data":"c6cc72f25e5139bd95f3c162d957b162877379477f7353798c49bae90c99c579"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.553224 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" event={"ID":"e34b1f8d-2d33-44a6-830b-9d2c9840117c","Type":"ContainerStarted","Data":"4a8c02937e7199b45a5867efd7c4b5569d816a310c1c59267da2f1ff8e00f470"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.553990 4707 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" event={"ID":"8dc0e19a-9d79-4aa0-a483-f03e6df544f3","Type":"ContainerStarted","Data":"5a1aea7c19f6a21684da99836f20f6db6fa42dff7cdd88fad128bb79bf5b817e"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.555162 4707 generic.go:334] "Generic (PLEG): container finished" podID="ba542d52-1a53-45d0-99c7-a56d028d0b42" containerID="b016c806024a79081fed9bdc66c356bef9fae597efe4a4d5835fc47028f501db" exitCode=0 Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.556100 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" event={"ID":"ba542d52-1a53-45d0-99c7-a56d028d0b42","Type":"ContainerDied","Data":"b016c806024a79081fed9bdc66c356bef9fae597efe4a4d5835fc47028f501db"} Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.556255 4707 patch_prober.go:28] interesting pod/console-operator-58897d9998-4cmd7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.556301 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4cmd7" podUID="6c241862-435c-471a-a61d-31511d807d8e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.577649 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.577736 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.07772055 +0000 UTC m=+76.574899597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.578324 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.595233 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:08.095210797 +0000 UTC m=+76.592389854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.679288 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.679572 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.179532736 +0000 UTC m=+76.676711783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.780513 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.781105 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.281085335 +0000 UTC m=+76.778264462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.881081 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.881237 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.38122051 +0000 UTC m=+76.878399557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.881310 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.881600 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.381592279 +0000 UTC m=+76.878771316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.987675 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.987864 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.487814499 +0000 UTC m=+76.984993556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:07 crc kubenswrapper[4707]: I1213 06:56:07.988045 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:07 crc kubenswrapper[4707]: E1213 06:56:07.988310 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.48829913 +0000 UTC m=+76.985478177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.089741 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.089912 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.58988179 +0000 UTC m=+77.087060847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.089945 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.090413 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.590397893 +0000 UTC m=+77.087576940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.191760 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.192075 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.692041504 +0000 UTC m=+77.189220561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.192241 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.192574 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.692560996 +0000 UTC m=+77.189740043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.293574 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.293764 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.793739337 +0000 UTC m=+77.290918384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.294040 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.294381 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.794368042 +0000 UTC m=+77.291547089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.395149 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.395574 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.895554352 +0000 UTC m=+77.392733399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.496752 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.497229 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:08.997214784 +0000 UTC m=+77.494393841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.560330 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" event={"ID":"6f8ce01d-989d-463c-8973-63d7b9fc78da","Type":"ContainerStarted","Data":"964a8ff26f3b726934815e4889ca1956543566b0ddde0ecee60d6f84a050b98d"}
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.561481 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" event={"ID":"5440a9b6-173c-40ee-a4a3-965427ed9cc3","Type":"ContainerStarted","Data":"8e380a37d5c608555129c8a63a7b94da877c261c4980b3ae1aeb44a74427e30e"}
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.562665 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" event={"ID":"98c17acd-c234-4d92-855e-313cde4274cb","Type":"ContainerStarted","Data":"fa72892362050b7692ce739b47cb8581ab8c917216c9ba8b0487c68450504def"}
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.563431 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz"
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.579971 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz"
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.592831 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9hhm7" podStartSLOduration=54.592813052 podStartE2EDuration="54.592813052s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:07.595728139 +0000 UTC m=+76.092907186" watchObservedRunningTime="2025-12-13 06:56:08.592813052 +0000 UTC m=+77.089992099"
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.598436 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.598783 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.098762653 +0000 UTC m=+77.595941710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.652685 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dsdvs" podStartSLOduration=53.652669428 podStartE2EDuration="53.652669428s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:08.594286897 +0000 UTC m=+77.091465934" watchObservedRunningTime="2025-12-13 06:56:08.652669428 +0000 UTC m=+77.149848475"
Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.700440 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.700889 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.200874246 +0000 UTC m=+77.698053303 (durationBeforeRetry 500ms).
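For context on the failure itself: kubelet resolves a CSI driver by name against the set of drivers that have completed plugin registration on the node, and until the hostpath provisioner's registrar announces itself, the lookup can only fail. A toy sketch of that lookup shape, not kubelet's actual implementation (the class name and socket path are illustrative assumptions):

```python
# Toy sketch (not kubelet source): the failure mode in these entries is a
# plain name lookup against a registry of drivers that have completed plugin
# registration. Until kubevirt.io.hostpath-provisioner registers, every
# MountDevice/TearDown attempt fails fast and is rescheduled.
class CsiDriverRegistry:
    def __init__(self):
        self._drivers = {}  # driver name -> endpoint (e.g. unix socket path)

    def register(self, name, endpoint):
        self._drivers[name] = endpoint

    def client_for(self, name):
        if name not in self._drivers:
            raise LookupError(
                f"driver name {name} not found in the list of registered CSI drivers"
            )
        return self._drivers[name]

registry = CsiDriverRegistry()
try:
    registry.client_for("kubevirt.io.hostpath-provisioner")
except LookupError as err:
    print(err)  # same message shape the kubelet logs above

# Once the driver pod comes up and registers, the identical lookup succeeds:
registry.register("kubevirt.io.hostpath-provisioner",
                  "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")  # illustrative path
print(registry.client_for("kubevirt.io.hostpath-provisioner"))
```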
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.701502 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" podStartSLOduration=54.70148253 podStartE2EDuration="54.70148253s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:08.674318753 +0000 UTC m=+77.171497810" watchObservedRunningTime="2025-12-13 06:56:08.70148253 +0000 UTC m=+77.198661567" Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.801350 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.801929 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.301909493 +0000 UTC m=+77.799088540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:08 crc kubenswrapper[4707]: I1213 06:56:08.903104 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:08 crc kubenswrapper[4707]: E1213 06:56:08.903519 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.403502163 +0000 UTC m=+77.900681210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.004701 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.005059 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.505044262 +0000 UTC m=+78.002223299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.106622 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.106932 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.606917369 +0000 UTC m=+78.104096406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.207810 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.208044 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.708015048 +0000 UTC m=+78.205194105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.208171 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.208571 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.70855451 +0000 UTC m=+78.205733557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.309050 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.309169 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.809147257 +0000 UTC m=+78.306326314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.309349 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.309601 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.809594017 +0000 UTC m=+78.306773064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.410226 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.410418 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.910387329 +0000 UTC m=+78.407566376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.410496 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.410840 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:09.910828269 +0000 UTC m=+78.408007316 (durationBeforeRetry 500ms). 
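Worth noting in the cycle above: the PVC is being torn down on behalf of one pod UID (8f668bae-…) while the incoming image-registry pod (b0aa6223-…) waits to mount it, and both paths need the same missing driver. A small sketch, with the operation names and UIDs copied from the entries, that groups the operations to make that shape visible:

```python
# Sketch: group the two competing operations by pod UID. The volume is still
# attributed to the terminated pod while the replacement registry pod waits
# to mount it; neither side can progress until the CSI driver registers.
from collections import defaultdict

ops_by_uid = defaultdict(list)
for op, uid in [
    ("UnmountVolume.TearDown", "8f668bae-612b-4b75-9490-919e737c6a3b"),   # outgoing pod
    ("MountVolume.MountDevice", "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"),  # image-registry pod
]:
    ops_by_uid[uid].append(op)

for uid, ops in ops_by_uid.items():
    print(uid, "->", ops)
```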
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.511927 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.512257 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.012241185 +0000 UTC m=+78.509420232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.568671 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" event={"ID":"cb44d620-0ac4-4b6d-a56c-48ef9c0b2548","Type":"ContainerStarted","Data":"2394ec5558d91c453216607caf5e742f10df67121250885cdd3a644ad2857c08"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.569995 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" event={"ID":"adc3663d-54be-4d1f-a40d-14575ee3f644","Type":"ContainerStarted","Data":"eacfecb7cbe1031610fc8f5e2a5205bc54630838a9807dcbd0cc5324b4a1bf5a"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.571082 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" event={"ID":"f748007d-42fd-4882-9ead-51d2c65b5373","Type":"ContainerStarted","Data":"dd7aeb293a427ffa64de47d2c34f2849c154c4a3f12f422c4c9df17e0cafbde4"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.572242 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" event={"ID":"e437c323-4d06-4be9-be5c-3063c078a1e8","Type":"ContainerStarted","Data":"00ffa1600e557d5f12bef2397b274f8a96decbb2ae4e30b987a09bfd7abebd1d"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.573351 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" event={"ID":"4b01406f-154d-4f81-8371-f998b19039cd","Type":"ContainerStarted","Data":"385ed48d586b2582cd6d2af95e7aeb15a949a73d10569581b4c41ea56811eea8"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.574526 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" event={"ID":"e5fb8205-458e-4e43-a541-9438509a3fc4","Type":"ContainerStarted","Data":"92a505d9a88e27cc46ea219e47f521856bd7a1733ece96486c954b514edbec9d"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.575930 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" event={"ID":"b3ed5e4f-6a03-4160-9035-5d1ab9d8d5e7","Type":"ContainerStarted","Data":"d93e85e7c859e70e5418c070ac0ff30a6678e3e3086b83b536945a34d8a8f4a7"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.577168 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" event={"ID":"d1864a49-e046-40ca-be97-b4e44acfdfa9","Type":"ContainerStarted","Data":"23abb88a9658de36905423bbb01ec2a33ff9ce72c845d063524a38fc944694a9"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.578453 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" event={"ID":"92d25bf9-ab58-4a10-82f8-8e5dd3aa33aa","Type":"ContainerStarted","Data":"07b535fd78d584132d80e9e8f9c51056f849c0f47ab8a69b66b8b7db1d4af8ef"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.579866 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" event={"ID":"cc0d338c-f40c-45a1-9529-0eca9141065d","Type":"ContainerStarted","Data":"6c51494e4af9c366e1a389c3b93c13e7dcc203b78489e519ffe35907a558f5c8"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.581220 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" event={"ID":"d4ae296e-7f7a-4903-b425-729fba19cd77","Type":"ContainerStarted","Data":"ce0d232c7b02a89f8040c037fc6721381a945aa3f261175f0dd0dab76d2d39cb"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.582989 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-c8dqr" event={"ID":"befe42e4-2c02-458c-8465-d0190290e122","Type":"ContainerStarted","Data":"2e9f1ea747d84387410efae4ee46878e0bf45514168fc9f65c451a1fc984ad7d"}
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.583170 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dfxc7"
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.584841 4707 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfxc7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.584883 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfxc7" podUID="12994b98-2a7b-461d-a7bb-9ef25135c20c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.615175 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.616348 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.116327845 +0000 UTC m=+78.613506912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.716935 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.717147 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.217117256 +0000 UTC m=+78.714296313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.717329 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.717716 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.21769958 +0000 UTC m=+78.714878627 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.818905 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.819250 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.319233629 +0000 UTC m=+78.816412676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:09 crc kubenswrapper[4707]: I1213 06:56:09.920086 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:09 crc kubenswrapper[4707]: E1213 06:56:09.920460 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.420432279 +0000 UTC m=+78.917611326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.021155 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.021349 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.521320913 +0000 UTC m=+79.018499960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.021413 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.021927 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.521910367 +0000 UTC m=+79.019089414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.122191 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.122393 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.62236366 +0000 UTC m=+79.119542707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.122518 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.122918 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.622903283 +0000 UTC m=+79.120082330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.224404 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.224832 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.7248102 +0000 UTC m=+79.221989247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.326040 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.326422 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.826406171 +0000 UTC m=+79.323585208 (durationBeforeRetry 500ms). 
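The "No retries permitted until …" gate seen throughout comes from kubelet's pending-operations bookkeeping: a failed operation records an earliest-retry time keyed by volume, and attempts before that time are rejected outright. A toy model of that gate, not kubelet source (kubelet's real backoff grows beyond the initial step on repeated failure; this excerpt only ever shows the initial 500ms):

```python
# Toy model (not kubelet source) of the retry gate in the entries above.
import time

class PendingOperations:
    INITIAL_BACKOFF = 0.5  # seconds; matches the durationBeforeRetry 500ms logged here

    def __init__(self):
        self._not_before = {}  # operation key -> earliest permitted retry time

    def try_start(self, key):
        now = time.monotonic()
        if now < self._not_before.get(key, 0.0):
            raise RuntimeError(f"no retries permitted for {key} yet")
        return now

    def mark_failed(self, key):
        self._not_before[key] = time.monotonic() + self.INITIAL_BACKOFF

ops = PendingOperations()
key = "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db"
ops.try_start(key)
ops.mark_failed(key)
try:
    ops.try_start(key)          # an immediate retry is refused
except RuntimeError as err:
    print(err)
time.sleep(0.5)
print("retry allowed at", ops.try_start(key))
```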
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.430685 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.430866 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.930834719 +0000 UTC m=+79.428013766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.430991 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.431340 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:10.931332421 +0000 UTC m=+79.428511468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.532421 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.532735 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.032713536 +0000 UTC m=+79.529892593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.533014 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.533410 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.033401102 +0000 UTC m=+79.530580149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.591272 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" event={"ID":"45efad7b-cca0-4262-8c64-74d2cbbbfe5b","Type":"ContainerStarted","Data":"bcba363903b448249efa342e501a78fb96653edc1f7f19fa46cca0422d8ad102"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.595901 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kws2d" event={"ID":"93186f0d-0a67-42db-b750-a02749ab9f9a","Type":"ContainerStarted","Data":"33fe6658e78e7e189c6a882160b36ff40a38da6fd2d2c9668065d87f5c477aea"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.602331 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" event={"ID":"bb92b62b-9ab2-46d8-bd2b-cee2d1133778","Type":"ContainerStarted","Data":"179c2a9b1023af2b5fbfebdda542fa434bf6c08fae43463255241f4c2637e852"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.603773 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" event={"ID":"04a68c01-2285-41f7-830f-896a194c343d","Type":"ContainerStarted","Data":"a95c436927235afd4c1a67ad66353a6f85d43589e4cfab2746615e31e316a8ed"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.605936 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" event={"ID":"11795d95-2c15-4a44-9276-6786ec1e5cc1","Type":"ContainerStarted","Data":"dfae44c57b3f058cf596f099f0abe96a8830d565514b49373bab2aa895e17c56"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.607355 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" event={"ID":"b1585433-7721-4bfc-ae6c-68d963b7f65b","Type":"ContainerStarted","Data":"c9c5450dcc5d66f049788f938023e77c6c1664cbdbdcf3d61c00226bf65c42b6"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.610097 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z72vj" event={"ID":"42c9b6ce-3059-4c6f-b78a-19f12b2ef3c0","Type":"ContainerStarted","Data":"cf02351cc2c94158bf3a429ed2e87770533260fea04f08f0c6498c1a84daaf41"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.611313 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" event={"ID":"4b25c52d-5663-457b-ae69-0773516986e7","Type":"ContainerStarted","Data":"c5362805efa5899669d2c27f101c18674b5640d2ea396cd3183da9dcee7e6a00"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.612872 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" event={"ID":"93832597-9fae-41f4-8228-f3bf3e2ecd88","Type":"ContainerStarted","Data":"c81a51daa91f40413c57990960eb4b278d23db2319ed47885d65d3e35c0d6fa7"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.614876 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" event={"ID":"19d61a46-8489-456e-8766-8a3fef9a5f44","Type":"ContainerStarted","Data":"825440fb518b1af6d11b2afa2f93ce87f7eaf7a0ad1f3fd2309921a5cbf9843b"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.616453 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-d4qvp" event={"ID":"6b1fa22e-132c-42d6-9327-f6d826d92373","Type":"ContainerStarted","Data":"2b7d326771147137f8857c6215101b1c25e22f54dbb3078a9b52ca595208dc36"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.618197 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" event={"ID":"fe022427-e3d6-4213-9d27-67555b2bce9f","Type":"ContainerStarted","Data":"8776350c7d4eb77826304b2095d5b7942258647c891e2553cd09c54263167662"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.619624 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" event={"ID":"910e37ce-87c8-4083-a021-e91516eaa546","Type":"ContainerStarted","Data":"ea66e86637111008685b566f078055293f8de9293db210096761a88a7bbe13ab"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.624667 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" event={"ID":"98c17acd-c234-4d92-855e-313cde4274cb","Type":"ContainerStarted","Data":"2431396ff7316615f83985ccaf0d643060eae22933a06392c14520e8cd99e5b5"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.625679 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" event={"ID":"8dc0e19a-9d79-4aa0-a483-f03e6df544f3","Type":"ContainerStarted","Data":"f0f4dccbd979902bc40aeef102a41a51d8b6c1d1d3ef4bd4b1088a41972ade4e"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.626719 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" event={"ID":"9ae20fe7-4e86-4f78-a23c-1bd355da288d","Type":"ContainerStarted","Data":"e279174bacf7bbb909ca3f94bc8f0899f2ee7d81b6521db3f74d33ddaa8a6fb3"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.627813 4707 generic.go:334] "Generic (PLEG): container finished" podID="44c42d40-af3b-4fba-b43c-042218dfd0ef" containerID="8cfa363ac95cd82b6bc3108fe675aa453680f4e6ae08723b1ad7ff3c1b0c563f" exitCode=0
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.627923 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" event={"ID":"44c42d40-af3b-4fba-b43c-042218dfd0ef","Type":"ContainerDied","Data":"8cfa363ac95cd82b6bc3108fe675aa453680f4e6ae08723b1ad7ff3c1b0c563f"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.629140 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" event={"ID":"ba542d52-1a53-45d0-99c7-a56d028d0b42","Type":"ContainerStarted","Data":"d3a4ee41d319c2d5a7c049f196b461541522c5fcd53ab6415419997093508ca0"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.630041 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" event={"ID":"e34b1f8d-2d33-44a6-830b-9d2c9840117c","Type":"ContainerStarted","Data":"2815dbde60b05759422d2607ce6e16dbb0d426ae2e2b3b104a737cd76c74edb6"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.631926 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" event={"ID":"278a952c-12a9-469b-960b-d68b5f2d2f29","Type":"ContainerStarted","Data":"6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5"}
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.639416 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx"
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.639339 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.638465 4707 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfxc7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.641063 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfxc7" podUID="12994b98-2a7b-461d-a7bb-9ef25135c20c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.639384 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.139373207 +0000 UTC m=+79.636552254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.641422 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.641789 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.141781134 +0000 UTC m=+79.638960181 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.659374 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podStartSLOduration=9.659356453000001 podStartE2EDuration="9.659356453s" podCreationTimestamp="2025-12-13 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:10.656384012 +0000 UTC m=+79.153563059" watchObservedRunningTime="2025-12-13 06:56:10.659356453 +0000 UTC m=+79.156535500" Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.670887 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dfxc7" podStartSLOduration=55.670864187 podStartE2EDuration="55.670864187s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:09.596786419 +0000 UTC m=+78.093965466" watchObservedRunningTime="2025-12-13 06:56:10.670864187 +0000 UTC m=+79.168043234" Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.697379 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zw7fp" podStartSLOduration=55.697363158 podStartE2EDuration="55.697363158s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:10.695087004 +0000 UTC m=+79.192266051" watchObservedRunningTime="2025-12-13 06:56:10.697363158 +0000 UTC m=+79.194542205" Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.732774 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.744435 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.745446 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.245426433 +0000 UTC m=+79.742605480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.848581 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.848930 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.348918259 +0000 UTC m=+79.846097306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:10 crc kubenswrapper[4707]: I1213 06:56:10.951297 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:10 crc kubenswrapper[4707]: E1213 06:56:10.951676 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.451644556 +0000 UTC m=+79.948823603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.053595 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.054358 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.554338282 +0000 UTC m=+80.051517400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.155211 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.155619 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.655600495 +0000 UTC m=+80.152779542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.256419 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.256803 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.756783405 +0000 UTC m=+80.253962472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.357879 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.358852 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.858834686 +0000 UTC m=+80.356013733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.359299 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.359611 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.859604065 +0000 UTC m=+80.356783112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.460963 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.461568 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:11.961546983 +0000 UTC m=+80.458726030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.563333 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.563866 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.06385455 +0000 UTC m=+80.561033597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.646655 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" event={"ID":"836ef8b4-4ea3-44d4-a07a-724ea1940c40","Type":"ContainerStarted","Data":"daee51ec0bf92c72e498e8d87b9954b392870c8823f90a3deb8d5967c0d1be3a"} Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.652308 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" event={"ID":"b383ce4b-acbd-4331-9244-27387ddc224a","Type":"ContainerStarted","Data":"d622f30446c5a3506dedb78ae3df9857545dee2c73f202aca7ef6c56f7cd6fea"} Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.655026 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.664385 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.664525 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.164501717 +0000 UTC m=+80.661680764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.664669 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.664935 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.164923397 +0000 UTC m=+80.662102444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.673557 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.735285 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-g7h57" podStartSLOduration=56.735265323 podStartE2EDuration="56.735265323s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.691087431 +0000 UTC m=+80.188266478" watchObservedRunningTime="2025-12-13 06:56:11.735265323 +0000 UTC m=+80.232444370" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.759534 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rnl8c" podStartSLOduration=56.759502241 podStartE2EDuration="56.759502241s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.752445162 +0000 UTC m=+80.249624209" watchObservedRunningTime="2025-12-13 06:56:11.759502241 +0000 UTC m=+80.256681278" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.760305 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-l46tr" podStartSLOduration=56.76030036 podStartE2EDuration="56.76030036s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.727492358 +0000 UTC m=+80.224671405" watchObservedRunningTime="2025-12-13 06:56:11.76030036 +0000 UTC m=+80.257479407" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.766424 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.768000 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.267984683 +0000 UTC m=+80.765163730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.805866 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wtlqx"] Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.806327 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" podStartSLOduration=56.806312906 podStartE2EDuration="56.806312906s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.803435927 +0000 UTC m=+80.300614974" watchObservedRunningTime="2025-12-13 06:56:11.806312906 +0000 UTC m=+80.303491943" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.830908 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" podStartSLOduration=56.830892501 podStartE2EDuration="56.830892501s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.828203527 +0000 UTC m=+80.325382584" watchObservedRunningTime="2025-12-13 06:56:11.830892501 +0000 UTC m=+80.328071538" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.868967 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.869416 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:12.369402109 +0000 UTC m=+80.866581156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.941361 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d6cfk" podStartSLOduration=56.941215629 podStartE2EDuration="56.941215629s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.880204746 +0000 UTC m=+80.377383793" watchObservedRunningTime="2025-12-13 06:56:11.941215629 +0000 UTC m=+80.438394686" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.941692 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gfptl" podStartSLOduration=56.941677591 podStartE2EDuration="56.941677591s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.923272372 +0000 UTC m=+80.420451419" watchObservedRunningTime="2025-12-13 06:56:11.941677591 +0000 UTC m=+80.438856638" Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.980059 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.980229 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.480175258 +0000 UTC m=+80.977354305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:11 crc kubenswrapper[4707]: I1213 06:56:11.980427 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:11 crc kubenswrapper[4707]: E1213 06:56:11.980716 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.48070413 +0000 UTC m=+80.977883177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:11.998417 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svb4r" podStartSLOduration=56.979388479 podStartE2EDuration="56.979388479s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:11.94584389 +0000 UTC m=+80.443022937" watchObservedRunningTime="2025-12-13 06:56:11.979388479 +0000 UTC m=+80.476567526" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.000241 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qr5gn" podStartSLOduration=58.000230815 podStartE2EDuration="58.000230815s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.000079232 +0000 UTC m=+80.497258289" watchObservedRunningTime="2025-12-13 06:56:12.000230815 +0000 UTC m=+80.497409862" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.077049 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-c8dqr" podStartSLOduration=11.077033355 podStartE2EDuration="11.077033355s" podCreationTimestamp="2025-12-13 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.023257054 +0000 UTC m=+80.520436101" watchObservedRunningTime="2025-12-13 06:56:12.077033355 +0000 UTC m=+80.574212402" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 
06:56:12.081190 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.081719 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.581677876 +0000 UTC m=+81.078856923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.110315 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bg8hn" podStartSLOduration=57.110296928 podStartE2EDuration="57.110296928s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.107924881 +0000 UTC m=+80.605103928" watchObservedRunningTime="2025-12-13 06:56:12.110296928 +0000 UTC m=+80.607475975" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.111779 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" podStartSLOduration=57.111769283 podStartE2EDuration="57.111769283s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.075874097 +0000 UTC m=+80.573053154" watchObservedRunningTime="2025-12-13 06:56:12.111769283 +0000 UTC m=+80.608948330" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.176698 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" podStartSLOduration=58.176683889 podStartE2EDuration="58.176683889s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.171170738 +0000 UTC m=+80.668349785" watchObservedRunningTime="2025-12-13 06:56:12.176683889 +0000 UTC m=+80.673862936" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.183726 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.683711536 +0000 UTC m=+81.180890583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.183372 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.292371 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.292726 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.792711513 +0000 UTC m=+81.289890560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.395027 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.395367 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.895355108 +0000 UTC m=+81.392534155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.496545 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.496702 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.996677372 +0000 UTC m=+81.493856409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.497005 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.497296 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:12.997288437 +0000 UTC m=+81.494467484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.598021 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.598209 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.09816035 +0000 UTC m=+81.595339397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.598305 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.598630 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.098618731 +0000 UTC m=+81.595797778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.699104 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.699253 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.199233718 +0000 UTC m=+81.696412765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.699343 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.699671 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.199663208 +0000 UTC m=+81.696842255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.800389 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.800507 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.30048035 +0000 UTC m=+81.797659397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.800609 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.800860 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.300850689 +0000 UTC m=+81.798029836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.833202 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.833800 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.858231 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.861064 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.861207 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 13 06:56:12 crc kubenswrapper[4707]: W1213 06:56:12.865209 4707 reflector.go:561] object-"openshift-kube-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.865374 4707 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 13 06:56:12 crc kubenswrapper[4707]: W1213 06:56:12.865226 4707 reflector.go:561] object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n": failed to list *v1.Secret: secrets "installer-sa-dockercfg-5pr6n" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-kube-apiserver": no relationship found between node 'crc' and this object Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.865514 4707 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-5pr6n\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"installer-sa-dockercfg-5pr6n\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.884868 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-d4qvp" podStartSLOduration=57.8848538 podStartE2EDuration="57.8848538s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.833058886 +0000 UTC m=+81.330237943" watchObservedRunningTime="2025-12-13 06:56:12.8848538 +0000 UTC m=+81.382032847" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.885039 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.901793 4707 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.901993 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.401965307 +0000 UTC m=+81.899144354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.902083 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.902117 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3b21338-6240-404f-af83-ffed1f5f3fe9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.902168 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21338-6240-404f-af83-ffed1f5f3fe9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:12 crc kubenswrapper[4707]: E1213 06:56:12.902452 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.402439879 +0000 UTC m=+81.899618926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.920677 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2tzdp" podStartSLOduration=57.920658193 podStartE2EDuration="57.920658193s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.888428635 +0000 UTC m=+81.385607682" watchObservedRunningTime="2025-12-13 06:56:12.920658193 +0000 UTC m=+81.417837240" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.922750 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" podStartSLOduration=58.922743392 podStartE2EDuration="58.922743392s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.921323478 +0000 UTC m=+81.418502525" watchObservedRunningTime="2025-12-13 06:56:12.922743392 +0000 UTC m=+81.419922439" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.986600 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" podStartSLOduration=57.986584263 podStartE2EDuration="57.986584263s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.985892607 +0000 UTC m=+81.483071654" watchObservedRunningTime="2025-12-13 06:56:12.986584263 +0000 UTC m=+81.483763310" Dec 13 06:56:12 crc kubenswrapper[4707]: I1213 06:56:12.987299 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" podStartSLOduration=57.98729513 podStartE2EDuration="57.98729513s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:12.950266008 +0000 UTC m=+81.447445065" watchObservedRunningTime="2025-12-13 06:56:12.98729513 +0000 UTC m=+81.484474177" Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.002687 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.002861 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:13.50283458 +0000 UTC m=+82.000013627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.002964 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21338-6240-404f-af83-ffed1f5f3fe9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.003062 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.003082 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3b21338-6240-404f-af83-ffed1f5f3fe9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.003090 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21338-6240-404f-af83-ffed1f5f3fe9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.003333 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.503324832 +0000 UTC m=+82.000503879 (durationBeforeRetry 500ms). 
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.103761 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.104128 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.604104213 +0000 UTC m=+82.101283260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.206088 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.206425 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.70641448 +0000 UTC m=+82.203593527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.210092 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4cmd7"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.243321 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.243907 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.261668 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.261906 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.265687 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.307525 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.308024 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f00213e0-4841-456c-8c78-2ab1692951fb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.308151 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00213e0-4841-456c-8c78-2ab1692951fb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.309169 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.809146878 +0000 UTC m=+82.306325925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.409719 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.410404 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:13.910387399 +0000 UTC m=+82.407566446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.410581 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f00213e0-4841-456c-8c78-2ab1692951fb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.410676 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00213e0-4841-456c-8c78-2ab1692951fb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.411090 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f00213e0-4841-456c-8c78-2ab1692951fb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.434411 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6c4h6"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.435075 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6c4h6"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.438084 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00213e0-4841-456c-8c78-2ab1692951fb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.438172 4707 patch_prober.go:28] interesting pod/console-f9d7485db-6c4h6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.438212 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6c4h6" podUID="c73c65c7-bef5-4db7-8e02-bbe2b39f1758" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.511651 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.511665 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.011644102 +0000 UTC m=+82.508823149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.511961 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.512313 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.012303047 +0000 UTC m=+82.509482174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.967726 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969119 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969173 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969229 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969296 4707 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfxc7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969341 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfxc7" podUID="12994b98-2a7b-461d-a7bb-9ef25135c20c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 13 06:56:13 crc kubenswrapper[4707]: E1213 06:56:13.969442 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.469427557 +0000 UTC m=+82.966606604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969553 4707 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfxc7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.969582 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfxc7" podUID="12994b98-2a7b-461d-a7bb-9ef25135c20c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.974644 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.974792 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.977473 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" gracePeriod=30
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.981383 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z"
Dec 13 06:56:13 crc kubenswrapper[4707]: I1213 06:56:13.985434 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3b21338-6240-404f-af83-ffed1f5f3fe9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.016995 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gkw57"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.024148 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.029454 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.033613 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkw57"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.050270 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.070456 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6r8\" (UniqueName: \"kubernetes.io/projected/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-kube-api-access-jc6r8\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.070606 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.070747 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-catalog-content\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.070807 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-utilities\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.072610 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.572594045 +0000 UTC m=+83.069773092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.142071 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.173486 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.174050 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6r8\" (UniqueName: \"kubernetes.io/projected/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-kube-api-access-jc6r8\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.174198 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-catalog-content\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.174232 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-utilities\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.175130 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.675089887 +0000 UTC m=+83.172268934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.176249 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-utilities\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.176354 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-catalog-content\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.222038 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6r8\" (UniqueName: \"kubernetes.io/projected/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-kube-api-access-jc6r8\") pod \"community-operators-gkw57\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.272436 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t685j"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.273449 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.279366 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.279442 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmtn\" (UniqueName: \"kubernetes.io/projected/31a88c0f-62fb-4209-8acf-b106c6d03d6e-kube-api-access-qrmtn\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.279459 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-catalog-content\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.279491 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-utilities\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.286312 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.786294106 +0000 UTC m=+83.283473203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.309097 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t685j"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.351171 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkw57"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.381596 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.382034 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmtn\" (UniqueName: \"kubernetes.io/projected/31a88c0f-62fb-4209-8acf-b106c6d03d6e-kube-api-access-qrmtn\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.382068 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-catalog-content\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.382111 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-utilities\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.382572 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-utilities\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.383100 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.883079642 +0000 UTC m=+83.380258689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.383617 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-catalog-content\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.444849 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmtn\" (UniqueName: \"kubernetes.io/projected/31a88c0f-62fb-4209-8acf-b106c6d03d6e-kube-api-access-qrmtn\") pod \"community-operators-t685j\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.483006 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.483563 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:14.983543345 +0000 UTC m=+83.480722462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.501847 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z47zb"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.503447 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.511738 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.518166 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z47zb"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.564071 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.584557 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.585112 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.085092854 +0000 UTC m=+83.582271911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.603274 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.606356 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.616091 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.616325 4707 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.619554 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t685j"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.685808 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cbk\" (UniqueName: \"kubernetes.io/projected/1c15410b-e411-44d7-8663-d19dfc2a76b6-kube-api-access-c2cbk\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.685859 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.685880 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-catalog-content\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.685917 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-utilities\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.686226 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.186213013 +0000 UTC m=+83.683392060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.689992 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhlhc"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.690964 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhlhc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.718861 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhlhc"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.783544 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.786575 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.786959 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.787124 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.287097367 +0000 UTC m=+83.784276474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.787290 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cbk\" (UniqueName: \"kubernetes.io/projected/1c15410b-e411-44d7-8663-d19dfc2a76b6-kube-api-access-c2cbk\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.787724 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf"
Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.788022 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.288011209 +0000 UTC m=+83.785190256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.788242 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-catalog-content\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.788682 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-catalog-content\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.788725 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-utilities\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.788827 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-utilities\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.788852 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562nf\" (UniqueName: \"kubernetes.io/projected/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-kube-api-access-562nf\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.789265 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-utilities\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.800549 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.816229 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.821058 4707 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lf7l6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.821096 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" podUID="9ae20fe7-4e86-4f78-a23c-1bd355da288d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.821542 4707 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lf7l6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.821565 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" podUID="9ae20fe7-4e86-4f78-a23c-1bd355da288d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.822719 4707 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lf7l6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.822745 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" podUID="9ae20fe7-4e86-4f78-a23c-1bd355da288d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.830789 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.832665 4707 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hpx2l container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.832694 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" podUID="8dc0e19a-9d79-4aa0-a483-f03e6df544f3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.832748 4707 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hpx2l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.832759 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" podUID="8dc0e19a-9d79-4aa0-a483-f03e6df544f3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.832927 4707 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hpx2l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.832968 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" podUID="8dc0e19a-9d79-4aa0-a483-f03e6df544f3" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.839744 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cbk\" (UniqueName: \"kubernetes.io/projected/1c15410b-e411-44d7-8663-d19dfc2a76b6-kube-api-access-c2cbk\") pod \"certified-operators-z47zb\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " pod="openshift-marketplace/certified-operators-z47zb"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.861385 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-d4qvp"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.862233 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.862290 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.889588 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.889738 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-utilities\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.889779 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562nf\" (UniqueName: \"kubernetes.io/projected/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-kube-api-access-562nf\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc"
Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.889820 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc"
\"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.890243 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.890305 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.390292185 +0000 UTC m=+83.887471232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.890493 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-utilities\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.903237 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.924277 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562nf\" (UniqueName: \"kubernetes.io/projected/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-kube-api-access-562nf\") pod \"certified-operators-rhlhc\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:56:14 crc kubenswrapper[4707]: I1213 06:56:14.994595 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:14 crc kubenswrapper[4707]: E1213 06:56:14.995911 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.495899621 +0000 UTC m=+83.993078668 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.009705 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f00213e0-4841-456c-8c78-2ab1692951fb","Type":"ContainerStarted","Data":"32aa1c6306cae22d4df71c56b56bbea367f9d630f3a627bc85c34487d20e0a91"} Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.025916 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.033582 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" event={"ID":"5440a9b6-173c-40ee-a4a3-965427ed9cc3","Type":"ContainerStarted","Data":"ef0562c84fbf05ce82f6fa11f53aa5ebd356c8f14a8081456b132d7cf5a9bca1"} Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.050947 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f3b21338-6240-404f-af83-ffed1f5f3fe9","Type":"ContainerStarted","Data":"3ba70fea9703d5696f8bcbb2eaef967705425e0e4a638157006dc65965a99887"} Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.097703 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.098102 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.598088316 +0000 UTC m=+84.095267363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.112067 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-m8dll" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.127530 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.197822 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.19780087 podStartE2EDuration="1.19780087s" podCreationTimestamp="2025-12-13 06:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:15.196481599 +0000 UTC m=+83.693660646" watchObservedRunningTime="2025-12-13 06:56:15.19780087 +0000 UTC m=+83.694979917" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.225447 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t685j"] Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.238610 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.249699 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.749677806 +0000 UTC m=+84.246856853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.344300 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.344731 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.8447119 +0000 UTC m=+84.341890947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.423456 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.427797 4707 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7ctgj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.427858 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" podUID="ba542d52-1a53-45d0-99c7-a56d028d0b42" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.428247 4707 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7ctgj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.428302 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" podUID="ba542d52-1a53-45d0-99c7-a56d028d0b42" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.428628 4707 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7ctgj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.428644 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" podUID="ba542d52-1a53-45d0-99c7-a56d028d0b42" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.445551 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.446721 4707 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:15.94670958 +0000 UTC m=+84.443888627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.547426 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.547678 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.047646704 +0000 UTC m=+84.544825781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.547846 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.548342 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.0483249 +0000 UTC m=+84.545503947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.650314 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.651064 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.151046408 +0000 UTC m=+84.648225455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.752507 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.752908 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.252871693 +0000 UTC m=+84.750050740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.775001 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gkw57"] Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.861428 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.861480 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.867464 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.867848 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.367831182 +0000 UTC m=+84.865010229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.898478 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhlhc"] Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.969408 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:15 crc kubenswrapper[4707]: E1213 06:56:15.969752 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.46973823 +0000 UTC m=+84.966917287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:15 crc kubenswrapper[4707]: I1213 06:56:15.984285 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z47zb"] Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.057714 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerStarted","Data":"757a63c1cf672cd88ce0a4488670ba5de60014b2a7f1d906f8dbba9ea2150a69"} Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.058418 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerStarted","Data":"6ad26c4ccb4d3ab72779933226ab5ddc44b692dc908ed6b4e4cde5454d5e58bc"} Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.062064 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerStarted","Data":"263d872a9bb16289d5f3a7de029df71a06668c8222680a9aa9dadc8a45d55265"} Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.066261 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerStarted","Data":"d9225773aff29dbb8816520b1a46f0f688b17b3d2d1c9af642ee1933de237063"} Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.066853 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.072933 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.073038 4707 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mglrx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.073078 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.073222 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:16.573208045 +0000 UTC m=+85.070387092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.086531 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" podStartSLOduration=61.086513212 podStartE2EDuration="1m1.086513212s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:16.086078431 +0000 UTC m=+84.583257478" watchObservedRunningTime="2025-12-13 06:56:16.086513212 +0000 UTC m=+84.583692259" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.128197 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z72vj" podStartSLOduration=15.128181644 podStartE2EDuration="15.128181644s" podCreationTimestamp="2025-12-13 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:16.106987789 +0000 UTC m=+84.604166836" watchObservedRunningTime="2025-12-13 06:56:16.128181644 +0000 UTC m=+84.625360691" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.174865 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.177036 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.677022868 +0000 UTC m=+85.174201915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.262696 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pwlxr" podStartSLOduration=61.262681488 podStartE2EDuration="1m1.262681488s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:16.132882946 +0000 UTC m=+84.630062003" watchObservedRunningTime="2025-12-13 06:56:16.262681488 +0000 UTC m=+84.759860535" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.263096 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-75wd8"] Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.264072 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.266730 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.278582 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.278759 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.778729291 +0000 UTC m=+85.275908338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.278874 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.279151 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:16.779143211 +0000 UTC m=+85.276322258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.314579 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75wd8"] Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.379328 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.379676 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-catalog-content\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.379798 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhrn\" (UniqueName: \"kubernetes.io/projected/fb1a1a41-b86e-4de7-af57-7862986b3037-kube-api-access-lmhrn\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.379911 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-utilities\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.380127 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.880114066 +0000 UTC m=+85.377293113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.481430 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-catalog-content\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.482797 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.482896 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhrn\" (UniqueName: \"kubernetes.io/projected/fb1a1a41-b86e-4de7-af57-7862986b3037-kube-api-access-lmhrn\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.483028 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-utilities\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.483386 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-utilities\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.482725 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-catalog-content\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.483738 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:16.983726474 +0000 UTC m=+85.480905521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.505444 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhrn\" (UniqueName: \"kubernetes.io/projected/fb1a1a41-b86e-4de7-af57-7862986b3037-kube-api-access-lmhrn\") pod \"redhat-marketplace-75wd8\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.584104 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.584205 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.084183588 +0000 UTC m=+85.581362655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.584637 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.584901 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.084893125 +0000 UTC m=+85.582072162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
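
The block above is one complete cycle of the pattern that dominates this excerpt: the volume reconciler starts UnmountVolume.TearDown for the terminating pod (UID 8f668bae-612b-4b75-9490-919e737c6a3b) and MountVolume.MountDevice for the image-registry pod that wants the same PVC, both fail with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", and nestedpendingoperations schedules the next attempt 500 ms out (durationBeforeRetry 500ms). Nothing is wrong with the volume itself; the kubelet simply cannot construct a CSI client until the hostpath-provisioner driver re-registers with the freshly started kubelet's plugin registry. A minimal sketch of that wait-then-retry gate, assuming the conventional registration directory /var/lib/kubelet/plugins_registry and a node-driver-registrar-style "<driver>-reg.sock" socket name (both assumptions, not read from this log):

    // csiwait.go - illustration only: poll for a CSI driver's registration
    // socket, sleeping the same 500ms the kubelet logs as durationBeforeRetry.
    // Assumptions: registryDir and the "<driver>-reg.sock" naming follow the
    // node-driver-registrar convention; neither is taken from this log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    const (
        registryDir = "/var/lib/kubelet/plugins_registry"
        driverName  = "kubevirt.io.hostpath-provisioner"
        retryDelay  = 500 * time.Millisecond // mirrors durationBeforeRetry above
    )

    func main() {
        sock := filepath.Join(registryDir, driverName+"-reg.sock")
        for {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("driver registered:", sock) // mounts can proceed
                return
            }
            // Like nestedpendingoperations, permit no retry until the
            // backoff window has elapsed.
            fmt.Printf("%s not registered yet; retrying in %s\n", driverName, retryDelay)
            time.Sleep(retryDelay)
        }
    }

Note that durationBeforeRetry stays flat at 500ms on every attempt in this excerpt; the loop would resolve only once the driver's registration appears, at which point both the pending TearDown and the image-registry MountDevice can complete on their next tick.
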
Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.612292 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.661775 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwcjf"] Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.662785 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.673525 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwcjf"] Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.686027 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.686580 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.186565127 +0000 UTC m=+85.683744174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.787376 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.787706 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.287694846 +0000 UTC m=+85.784873893 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.787856 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dxwl\" (UniqueName: \"kubernetes.io/projected/90f36b9c-fa8d-4257-ad64-2ee8a487d037-kube-api-access-5dxwl\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.787934 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-catalog-content\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.787998 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-utilities\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.850563 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75wd8"] Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.862124 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:16 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:16 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:16 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.862173 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.889633 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.889889 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.38985859 +0000 UTC m=+85.887037637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.889978 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-catalog-content\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.890108 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-utilities\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.890347 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.890448 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dxwl\" (UniqueName: \"kubernetes.io/projected/90f36b9c-fa8d-4257-ad64-2ee8a487d037-kube-api-access-5dxwl\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.890479 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-catalog-content\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.890772 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.390756881 +0000 UTC m=+85.887936008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.890852 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-utilities\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.912576 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dxwl\" (UniqueName: \"kubernetes.io/projected/90f36b9c-fa8d-4257-ad64-2ee8a487d037-kube-api-access-5dxwl\") pod \"redhat-marketplace-cwcjf\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.986541 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.991152 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.991300 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.491281226 +0000 UTC m=+85.988460273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:16 crc kubenswrapper[4707]: I1213 06:56:16.991542 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:16 crc kubenswrapper[4707]: E1213 06:56:16.991879 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.49187189 +0000 UTC m=+85.989050927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.071786 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerStarted","Data":"b97964b51cc49b18e4bcf3a69d49d59d1a608b9661f9fc032e083f5e969f6b10"} Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.072423 4707 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mglrx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.072459 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.099117 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.099269 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.599246358 +0000 UTC m=+86.096425405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.099390 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.099685 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-13 06:56:17.599675478 +0000 UTC m=+86.096854525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.200541 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.200697 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.700674334 +0000 UTC m=+86.197853411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.200837 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.201193 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.701181876 +0000 UTC m=+86.198360923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.260804 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2t9v8"] Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.262049 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.264548 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.287195 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2t9v8"] Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.304272 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.804254402 +0000 UTC m=+86.301433449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.304300 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.304518 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.304557 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-catalog-content\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.304606 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7c4\" (UniqueName: \"kubernetes.io/projected/6f152c64-57be-4ac0-8422-d5d504f02cf8-kube-api-access-mn7c4\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.304664 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-utilities\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.304909 4707 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.804902187 +0000 UTC m=+86.302081224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.364823 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwcjf"] Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.405663 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.405818 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.905788731 +0000 UTC m=+86.402967778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.405919 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-utilities\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.405982 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.406020 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-catalog-content\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.406059 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7c4\" (UniqueName: 
\"kubernetes.io/projected/6f152c64-57be-4ac0-8422-d5d504f02cf8-kube-api-access-mn7c4\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.406383 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:17.906358154 +0000 UTC m=+86.403537201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.406595 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-catalog-content\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.406628 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-utilities\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.428373 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn7c4\" (UniqueName: \"kubernetes.io/projected/6f152c64-57be-4ac0-8422-d5d504f02cf8-kube-api-access-mn7c4\") pod \"redhat-operators-2t9v8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.507168 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.507332 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.007311769 +0000 UTC m=+86.504490826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.507458 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.507687 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.007679528 +0000 UTC m=+86.504858575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.579382 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.612766 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.612910 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.112892525 +0000 UTC m=+86.610071572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.613174 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.613414 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.113406087 +0000 UTC m=+86.610585124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.699915 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5s7zh"] Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.700882 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.715876 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.716013 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.215995631 +0000 UTC m=+86.713174678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.716098 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.716431 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.216423441 +0000 UTC m=+86.713602488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.726373 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5s7zh"] Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.817510 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.817737 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-catalog-content\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.817804 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blslw\" (UniqueName: \"kubernetes.io/projected/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-kube-api-access-blslw\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.817832 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-utilities\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: 
E1213 06:56:17.817923 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.317909329 +0000 UTC m=+86.815088376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.862625 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:17 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:17 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:17 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.862672 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.919409 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blslw\" (UniqueName: \"kubernetes.io/projected/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-kube-api-access-blslw\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.919446 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-utilities\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.919506 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.919544 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-catalog-content\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.920304 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-catalog-content\") pod \"redhat-operators-5s7zh\" (UID: 
\"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.920527 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-utilities\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:17 crc kubenswrapper[4707]: E1213 06:56:17.920987 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.420976574 +0000 UTC m=+86.918155621 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:17 crc kubenswrapper[4707]: I1213 06:56:17.954567 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blslw\" (UniqueName: \"kubernetes.io/projected/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-kube-api-access-blslw\") pod \"redhat-operators-5s7zh\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.016407 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.020783 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.021236 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.521217502 +0000 UTC m=+87.018396549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.038694 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2t9v8"] Dec 13 06:56:18 crc kubenswrapper[4707]: W1213 06:56:18.049286 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f152c64_57be_4ac0_8422_d5d504f02cf8.slice/crio-871352def061307db1b690178e6c17d6a219b9919f2b5fefd08360a023f97c1f WatchSource:0}: Error finding container 871352def061307db1b690178e6c17d6a219b9919f2b5fefd08360a023f97c1f: Status 404 returned error can't find the container with id 871352def061307db1b690178e6c17d6a219b9919f2b5fefd08360a023f97c1f Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.077358 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerStarted","Data":"871352def061307db1b690178e6c17d6a219b9919f2b5fefd08360a023f97c1f"} Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.078702 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerStarted","Data":"97e2590a08b5c429793341df474df724462609f24c324a54345509a7d33c316b"} Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.125521 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.125962 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.625931007 +0000 UTC m=+87.123110054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.226462 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.226667 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.726639556 +0000 UTC m=+87.223818673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.226755 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.227356 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.727338433 +0000 UTC m=+87.224517480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.235283 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5s7zh"] Dec 13 06:56:18 crc kubenswrapper[4707]: W1213 06:56:18.241873 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0291ce56_de87_4ca7_b7d1_8b4b68de5cbb.slice/crio-dd312fd21d095757bb7d89e1f9ce95fa5d52accefdd525df71faf27dde63378b WatchSource:0}: Error finding container dd312fd21d095757bb7d89e1f9ce95fa5d52accefdd525df71faf27dde63378b: Status 404 returned error can't find the container with id dd312fd21d095757bb7d89e1f9ce95fa5d52accefdd525df71faf27dde63378b Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.328483 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.328851 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.82882293 +0000 UTC m=+87.326001987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.329001 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.329457 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.829440335 +0000 UTC m=+87.326619382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.424235 4707 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7ctgj container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.424299 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" podUID="ba542d52-1a53-45d0-99c7-a56d028d0b42" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.424235 4707 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-7ctgj container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.424344 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" podUID="ba542d52-1a53-45d0-99c7-a56d028d0b42" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.429656 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.429741 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:18.929723204 +0000 UTC m=+87.426902251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.854364 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.854611 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.354592405 +0000 UTC m=+87.851771452 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.854821 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.855141 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.355132947 +0000 UTC m=+87.852311994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.867120 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:18 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:18 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:18 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.867170 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:18 crc kubenswrapper[4707]: I1213 06:56:18.955928 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:18 crc kubenswrapper[4707]: E1213 06:56:18.956119 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.456088293 +0000 UTC m=+87.953267340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.082477 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerStarted","Data":"dd312fd21d095757bb7d89e1f9ce95fa5d52accefdd525df71faf27dde63378b"} Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.259830 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.260275 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.760262079 +0000 UTC m=+88.257441116 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.361285 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.361659 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.861647924 +0000 UTC m=+88.358826971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.462784 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.462970 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.463003 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.463063 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.463106 4707 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.463877 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:19.963849359 +0000 UTC m=+88.461028406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.469674 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.470059 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.470243 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.564048 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.564394 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.064377954 +0000 UTC m=+88.561557001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.569612 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.577467 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.679270 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.680870 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.180853689 +0000 UTC m=+88.678032736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.782211 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.782602 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.282588692 +0000 UTC m=+88.779767739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.863356 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:19 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:19 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:19 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.863421 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:19 crc kubenswrapper[4707]: W1213 06:56:19.874555 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-5d2946a343ec2ebc47d70f86aba2b7ff87d3e4a4a89d03742cb80086df2cfe41 WatchSource:0}: Error finding container 5d2946a343ec2ebc47d70f86aba2b7ff87d3e4a4a89d03742cb80086df2cfe41: Status 404 returned error can't find the container with id 5d2946a343ec2ebc47d70f86aba2b7ff87d3e4a4a89d03742cb80086df2cfe41 Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.884632 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.884840 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.384812047 +0000 UTC m=+88.881991094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.885109 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.885372 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.385365111 +0000 UTC m=+88.882544158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:19 crc kubenswrapper[4707]: I1213 06:56:19.986378 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:19 crc kubenswrapper[4707]: E1213 06:56:19.986648 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.486634133 +0000 UTC m=+88.983813180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.086945 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5d2946a343ec2ebc47d70f86aba2b7ff87d3e4a4a89d03742cb80086df2cfe41"} Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.087332 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.087604 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.587590808 +0000 UTC m=+89.084769855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.088657 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" event={"ID":"f748007d-42fd-4882-9ead-51d2c65b5373","Type":"ContainerStarted","Data":"738dd86dc19f77c0513eaf51efe283c66767135aad5993f8cfa529f86d845171"} Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.089422 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b77cd5fdc0788012887d8e9fdada7c068410c74db6c6371a1cae536175da28fb"} Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.188624 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.188823 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.688783659 +0000 UTC m=+89.185962736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.188906 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.189194 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.689182758 +0000 UTC m=+89.186361805 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.290402 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.290549 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.790524673 +0000 UTC m=+89.287703720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.290594 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.290912 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.790899562 +0000 UTC m=+89.288078609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.391364 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.391527 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.891494918 +0000 UTC m=+89.388673965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.391581 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.392049 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.892038121 +0000 UTC m=+89.389217168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.493157 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.493461 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:20.993445917 +0000 UTC m=+89.490624964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.594624 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.595828 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.095816766 +0000 UTC m=+89.592995813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.696588 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.697102 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.197082298 +0000 UTC m=+89.694261355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.798235 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.798593 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.298551135 +0000 UTC m=+89.795730182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.862181 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:20 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:20 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:20 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.862275 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.900013 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.900501 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.400473793 +0000 UTC m=+89.897652880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:20 crc kubenswrapper[4707]: I1213 06:56:20.900586 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:20 crc kubenswrapper[4707]: E1213 06:56:20.901126 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.401106819 +0000 UTC m=+89.898285906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.001573 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.001708 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.501691685 +0000 UTC m=+89.998870732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.002160 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.002442 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.502432012 +0000 UTC m=+89.999611059 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.104129 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.104695 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.604669228 +0000 UTC m=+90.101848325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.206244 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.206620 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.706601226 +0000 UTC m=+90.203780283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.307919 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.308143 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.808101274 +0000 UTC m=+90.305280351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.308643 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.309131 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.809112618 +0000 UTC m=+90.306291695 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.409493 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.409687 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.909649984 +0000 UTC m=+90.406829071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.410309 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:21.910288369 +0000 UTC m=+90.407467416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.410856 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.512332 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.512661 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.012646107 +0000 UTC m=+90.509825154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.614770 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.615238 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.115217451 +0000 UTC m=+90.612396598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.617825 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.658773 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.714208 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-7ctgj" Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.715274 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.715421 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.215400967 +0000 UTC m=+90.712580014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.715548 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.715982 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.21594709 +0000 UTC m=+90.713126137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.818422 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.819049 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.319030336 +0000 UTC m=+90.816209393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.870768 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:21 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:21 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:21 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.870820 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:21 crc kubenswrapper[4707]: I1213 06:56:21.920602 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:21 crc kubenswrapper[4707]: E1213 06:56:21.921120 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.421106328 +0000 UTC m=+90.918285375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.023628 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.024017 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.524001679 +0000 UTC m=+91.021180726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: W1213 06:56:22.096254 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-a5a305a0d1964c6ca5eaf74c88b762a292633c7650311c900f3caa39ec2b4dad WatchSource:0}: Error finding container a5a305a0d1964c6ca5eaf74c88b762a292633c7650311c900f3caa39ec2b4dad: Status 404 returned error can't find the container with id a5a305a0d1964c6ca5eaf74c88b762a292633c7650311c900f3caa39ec2b4dad Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.125724 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.126027 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.626016339 +0000 UTC m=+91.123195376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.227107 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.227638 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.72761289 +0000 UTC m=+91.224791977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.328421 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.328941 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.828915942 +0000 UTC m=+91.326095019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.429350 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.429531 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.929504228 +0000 UTC m=+91.426683275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.429668 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.429995 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:22.9299872 +0000 UTC m=+91.427166237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.531047 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.531255 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.031224812 +0000 UTC m=+91.528403869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.531319 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.531627 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.031616731 +0000 UTC m=+91.528795778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.632826 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.633156 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.13314114 +0000 UTC m=+91.630320187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.734619 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.734940 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.234928455 +0000 UTC m=+91.732107502 (durationBeforeRetry 500ms). 
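
Each failed attempt above ends with "No retries permitted until <timestamp> (durationBeforeRetry 500ms)": the kubelet's pending-operations tracker blocks the volume operation until that deadline passes, and attempts landing inside the window fail fast instead of re-running the mount. A simplified sketch of that gating pattern, assuming a fixed 500ms delay as logged (the real nestedpendingoperations code also applies backoff, which is not visible in this excerpt):

// retrygate.go - illustration of the retry gating visible in the log, not
// kubelet's nestedpendingoperations: after a failed operation, further
// attempts are refused until now+delay has passed.
package main

import (
	"errors"
	"fmt"
	"time"
)

type gate struct{ notBefore time.Time }

func (g *gate) try(op func() error, delay time.Duration) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s", g.notBefore.Format(time.RFC3339))
	}
	if err := op(); err != nil {
		g.notBefore = time.Now().Add(delay) // 500ms in the log above
		return err
	}
	return nil
}

func main() {
	g := &gate{}
	mount := func() error { return errors.New("driver name kubevirt.io.hostpath-provisioner not found") }
	for i := 0; i < 3; i++ {
		fmt.Println(g.try(mount, 500*time.Millisecond))
		time.Sleep(200 * time.Millisecond)
	}
}

Run as-is, this prints the driver-not-found error once, then the "no retries permitted until ..." refusal for the attempts that land inside the 500ms window — the same alternation the reconciler shows every ~100ms above.
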
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.835615 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.836130 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.336102205 +0000 UTC m=+91.833281252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.861258 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:22 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:22 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:22 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.861344 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:22 crc kubenswrapper[4707]: I1213 06:56:22.936877 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:22 crc kubenswrapper[4707]: E1213 06:56:22.937288 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.437271795 +0000 UTC m=+91.934450842 (durationBeforeRetry 500ms). 
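
Interleaved with the volume errors, the router's startup probe is failing with HTTP 500 and a healthz breakdown ([-]backend-http and [-]has-synced failing, [+]process-running ok). The kubelet's HTTP prober simply GETs the configured endpoint and treats any status outside 200-399 as a failure. A small sketch of the same check, assuming a placeholder URL (the actual probe target lives in the router pod spec, not in this log):

// httpprobe.go - sketch of the kubelet-style HTTP check behind the failing
// startup probe. The URL is a placeholder assumption; the real target comes
// from the router pod spec.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:1936/healthz/ready") // hypothetical endpoint
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		// This is the case the kubelet records as
		// "HTTP probe failed with statuscode: 500".
		fmt.Println("probe failed with statuscode:", resp.StatusCode)
	}
	fmt.Print(string(body)) // per-check lines such as "[-]backend-http failed: reason withheld"
}
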
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.037583 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.037753 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.537723768 +0000 UTC m=+92.034902815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.038152 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.038518 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.538503297 +0000 UTC m=+92.035682344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.114354 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a5a305a0d1964c6ca5eaf74c88b762a292633c7650311c900f3caa39ec2b4dad"} Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.139715 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.139923 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.639874472 +0000 UTC m=+92.137053519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.140078 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.140455 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.640440475 +0000 UTC m=+92.137619522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.240993 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.241160 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.741134434 +0000 UTC m=+92.238313481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.241271 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.241611 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.741603845 +0000 UTC m=+92.238782892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.342827 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.343000 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.84297724 +0000 UTC m=+92.340156287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.343219 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.343514 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.843502623 +0000 UTC m=+92.340681670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.435525 4707 patch_prober.go:28] interesting pod/console-f9d7485db-6c4h6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.435594 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6c4h6" podUID="c73c65c7-bef5-4db7-8e02-bbe2b39f1758" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.443810 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.444325 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:23.944295904 +0000 UTC m=+92.441474951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.546008 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.546507 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.046483838 +0000 UTC m=+92.543662925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.614331 4707 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfxc7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.614422 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dfxc7" podUID="12994b98-2a7b-461d-a7bb-9ef25135c20c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.614622 4707 patch_prober.go:28] interesting pod/downloads-7954f5f757-dfxc7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.614685 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dfxc7" podUID="12994b98-2a7b-461d-a7bb-9ef25135c20c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.646654 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.647048 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.147034784 +0000 UTC m=+92.644213831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.748472 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.748975 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.248938521 +0000 UTC m=+92.746117578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.849724 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.850063 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.35004782 +0000 UTC m=+92.847226867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.861970 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:23 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:23 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:23 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.862294 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:23 crc kubenswrapper[4707]: I1213 06:56:23.951610 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:23 crc kubenswrapper[4707]: E1213 06:56:23.951876 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.451861156 +0000 UTC m=+92.949040203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.053434 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.053631 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.553593619 +0000 UTC m=+93.050772696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.053793 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.054244 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.554229734 +0000 UTC m=+93.051408851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.120751 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" event={"ID":"b1585433-7721-4bfc-ae6c-68d963b7f65b","Type":"ContainerStarted","Data":"2d8c12613d850b0c71707a0f11d16cad146200fd9c2fc3f9566ccb4959c366d8"} Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.122404 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" event={"ID":"cc0d338c-f40c-45a1-9529-0eca9141065d","Type":"ContainerStarted","Data":"1b39551f452fdb689d8ee64284ecaae8d5fcd6c17d672d969a890d9d68f449e2"} Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.155221 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.155431 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.655403845 +0000 UTC m=+93.152582892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.256683 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.257008 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.756992585 +0000 UTC m=+93.254171632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.358233 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.358403 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.85838185 +0000 UTC m=+93.355560897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.358518 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.358812 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.85880208 +0000 UTC m=+93.355981127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.459104 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.459269 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.959244543 +0000 UTC m=+93.456423590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.459333 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.459634 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:24.959622912 +0000 UTC m=+93.456801959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.561305 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.561531 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.061488769 +0000 UTC m=+93.558667816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.561761 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.562090 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.062080603 +0000 UTC m=+93.559259650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.596028 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.597986 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.599711 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.599780 4707 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.663420 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.663582 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.16355719 +0000 UTC m=+93.660736237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.664353 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.664739 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.164712348 +0000 UTC m=+93.661891385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.765858 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.766063 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.266034972 +0000 UTC m=+93.763214019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.766182 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.766434 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.266427461 +0000 UTC m=+93.763606508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.821426 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lf7l6" Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.839448 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hpx2l" Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.850111 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.863054 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:24 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:24 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:24 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.863140 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.867689 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.867878 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.367849157 +0000 UTC m=+93.865028224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.868088 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.868487 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.368475522 +0000 UTC m=+93.865654559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:24 crc kubenswrapper[4707]: I1213 06:56:24.968688 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:24 crc kubenswrapper[4707]: E1213 06:56:24.970079 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.470056922 +0000 UTC m=+93.967235969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.070715 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.071092 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.571075759 +0000 UTC m=+94.068254796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.129294 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" event={"ID":"6f8ce01d-989d-463c-8973-63d7b9fc78da","Type":"ContainerStarted","Data":"15a7fe4dc9171ed3ddc8eb6417f8735c6975988a6a8bc5f69d74a42377c6f592"} Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.171564 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.172661 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.672640508 +0000 UTC m=+94.169819555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.274059 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.274548 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.774532165 +0000 UTC m=+94.271711212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.376169 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.376318 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.87629243 +0000 UTC m=+94.373471477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.376425 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.376737 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.87673038 +0000 UTC m=+94.373909427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.477155 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.477337 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.977317416 +0000 UTC m=+94.474496463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.477462 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.477724 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:25.977715646 +0000 UTC m=+94.474894693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.578623 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.579064 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.07904723 +0000 UTC m=+94.576226277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.680030 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.680385 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.180373784 +0000 UTC m=+94.677552831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.681476 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.780797 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.781047 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.281008801 +0000 UTC m=+94.778187848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.781112 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.781982 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.281970254 +0000 UTC m=+94.779149311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.865535 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:25 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:25 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:25 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.865631 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.882199 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.882409 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.382377285 +0000 UTC m=+94.879556332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.882560 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.882873 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.382862807 +0000 UTC m=+94.880041854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.983721 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.983885 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.483861263 +0000 UTC m=+94.981040310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:25 crc kubenswrapper[4707]: I1213 06:56:25.984140 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:25 crc kubenswrapper[4707]: E1213 06:56:25.984440 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.484431986 +0000 UTC m=+94.981611023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.084729 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.084883 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.584865799 +0000 UTC m=+95.082044846 (durationBeforeRetry 500ms). 
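
Analysis note: the E-level nestedpendingoperations entries show how the kubelet paces these failures. Each failed operation is recorded with a "No retries permitted until" deadline 500 ms in the future (durationBeforeRetry 500ms), and every subsequent reconciler pass re-arms the same pair of errors, which is why the block recurs at sub-second intervals for the whole trace. Note also the operation keys: the TearDown key carries podName:8f668bae-..., while the MountDevice key has an empty podName, since device staging is scoped to the volume rather than to a pod. A toy Go sketch of this gating behaviour, for illustration only (this is not kubelet's actual nestedpendingoperations implementation):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate mimics the "No retries permitted until ..." behaviour in the log:
// after a failure, the operation is refused until the deadline passes.
type retryGate struct {
	notBefore time.Time
	backoff   time.Duration
}

func (g *retryGate) try(op func() error) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s",
			g.notBefore.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		// Arm the next deadline, as durationBeforeRetry does in the log.
		g.notBefore = time.Now().Add(g.backoff)
		return err
	}
	return nil
}

func main() {
	g := &retryGate{backoff: 500 * time.Millisecond}
	mount := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found " +
			"in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		if err := g.try(mount); err != nil {
			fmt.Println("attempt", i, "->", err)
		}
		time.Sleep(200 * time.Millisecond) // reconciler pass, ~100-200 ms apart in the log
	}
}
```
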
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.085102 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.085414 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.585403992 +0000 UTC m=+95.082583039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.134918 4707 generic.go:334] "Generic (PLEG): container finished" podID="e5fb8205-458e-4e43-a541-9438509a3fc4" containerID="92a505d9a88e27cc46ea219e47f521856bd7a1733ece96486c954b514edbec9d" exitCode=0 Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.134983 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" event={"ID":"e5fb8205-458e-4e43-a541-9438509a3fc4","Type":"ContainerDied","Data":"92a505d9a88e27cc46ea219e47f521856bd7a1733ece96486c954b514edbec9d"} Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.137172 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerStarted","Data":"f983c7327702429525f69fc517a30df8cd4ca10d7bfd13c2ebe704a79e6dce9b"} Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.139498 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-kws2d" event={"ID":"93186f0d-0a67-42db-b750-a02749ab9f9a","Type":"ContainerStarted","Data":"5b15582fc10028e8ed74664156b1bab853a85c2e80d2527ae873ef4593dd7125"} Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.141121 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" event={"ID":"910e37ce-87c8-4083-a021-e91516eaa546","Type":"ContainerStarted","Data":"81060ce07cd8eddaa256a764bcc073028972a800f64196b57dff095d63b039e2"} Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.186363 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.186550 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.686523561 +0000 UTC m=+95.183702608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.186734 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.187196 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.687184737 +0000 UTC m=+95.184363854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.288255 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.288435 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.788413288 +0000 UTC m=+95.285592335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.288512 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.288797 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.788787387 +0000 UTC m=+95.285966434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.389519 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.389824 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.889778223 +0000 UTC m=+95.386957270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.390263 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.390549 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.890534311 +0000 UTC m=+95.387713358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.491994 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.492215 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.992185343 +0000 UTC m=+95.489364390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.492343 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.492733 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:26.992718995 +0000 UTC m=+95.489898042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.593602 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.593760 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.093729602 +0000 UTC m=+95.590908649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.593821 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.594187 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.094173722 +0000 UTC m=+95.591352839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.695448 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.695615 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.195587918 +0000 UTC m=+95.692766965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.695849 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.696202 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.196186533 +0000 UTC m=+95.693365600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.796765 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.797266 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.29724565 +0000 UTC m=+95.794424697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.861181 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:26 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:26 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:26 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.861268 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:26 crc kubenswrapper[4707]: I1213 06:56:26.898710 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:26 crc kubenswrapper[4707]: E1213 06:56:26.899214 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.399189029 +0000 UTC m=+95.896368116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:26.999849 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.000306 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.500287717 +0000 UTC m=+95.997466774 (durationBeforeRetry 500ms). 
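
Analysis note: interleaved with the volume retries, the router's startup probe keeps failing with HTTP 500. The probe body is a healthz-style roll-up in which backend-http and has-synced report "[-]... failed: reason withheld" while process-running reports "[+]... ok"; the aggregate therefore fails. A self-contained sketch that reproduces that output format, assuming a handler shaped like the Kubernetes healthz aggregator; the port and anything beyond the three check names seen in the log are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
)

// check is one named healthz sub-check, as in the router's probe output.
type check struct {
	name string
	ok   bool
}

func healthz(w http.ResponseWriter, _ *http.Request) {
	checks := []check{
		{"backend-http", false}, // assumed failing, as in the log
		{"has-synced", false},   // assumed failing, as in the log
		{"process-running", true},
	}
	healthy := true
	body := ""
	for _, c := range checks {
		if c.ok {
			body += fmt.Sprintf("[+]%s ok\n", c.name)
		} else {
			body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			healthy = false
		}
	}
	if !healthy {
		w.WriteHeader(http.StatusInternalServerError) // the 500 the kubelet prober reports
		body += "healthz check failed\n"
	}
	fmt.Fprint(w, body)
}

func main() {
	http.HandleFunc("/healthz", healthz)
	fmt.Println(http.ListenAndServe(":1936", nil)) // port is illustrative only
}
```
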
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.101549 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.101864 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.601847657 +0000 UTC m=+96.099026694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.149039 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" event={"ID":"44c42d40-af3b-4fba-b43c-042218dfd0ef","Type":"ContainerStarted","Data":"e125e8452363f5a36f3ea70113f620f36ae2225dd5e5f242a60dca0277524f61"} Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.202429 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.202590 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.702565406 +0000 UTC m=+96.199744453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.202847 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.203157 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.703149 +0000 UTC m=+96.200328047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.304104 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.304420 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.804404712 +0000 UTC m=+96.301583759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.405597 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.406456 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:27.906441083 +0000 UTC m=+96.403620130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.408984 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.507341 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fb8205-458e-4e43-a541-9438509a3fc4-config-volume\") pod \"e5fb8205-458e-4e43-a541-9438509a3fc4\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.507452 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fb8205-458e-4e43-a541-9438509a3fc4-secret-volume\") pod \"e5fb8205-458e-4e43-a541-9438509a3fc4\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.507507 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vrp6\" (UniqueName: \"kubernetes.io/projected/e5fb8205-458e-4e43-a541-9438509a3fc4-kube-api-access-2vrp6\") pod \"e5fb8205-458e-4e43-a541-9438509a3fc4\" (UID: \"e5fb8205-458e-4e43-a541-9438509a3fc4\") " Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.507592 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.507858 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:28.007842319 +0000 UTC m=+96.505021366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.508037 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fb8205-458e-4e43-a541-9438509a3fc4-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5fb8205-458e-4e43-a541-9438509a3fc4" (UID: "e5fb8205-458e-4e43-a541-9438509a3fc4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.513311 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fb8205-458e-4e43-a541-9438509a3fc4-kube-api-access-2vrp6" (OuterVolumeSpecName: "kube-api-access-2vrp6") pod "e5fb8205-458e-4e43-a541-9438509a3fc4" (UID: "e5fb8205-458e-4e43-a541-9438509a3fc4"). InnerVolumeSpecName "kube-api-access-2vrp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.513537 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fb8205-458e-4e43-a541-9438509a3fc4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5fb8205-458e-4e43-a541-9438509a3fc4" (UID: "e5fb8205-458e-4e43-a541-9438509a3fc4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.608563 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.608662 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vrp6\" (UniqueName: \"kubernetes.io/projected/e5fb8205-458e-4e43-a541-9438509a3fc4-kube-api-access-2vrp6\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.608681 4707 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fb8205-458e-4e43-a541-9438509a3fc4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.608694 4707 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fb8205-458e-4e43-a541-9438509a3fc4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.608995 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:28.108974688 +0000 UTC m=+96.606153755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.709414 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.709945 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:28.209921053 +0000 UTC m=+96.707100140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.811670 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.812268 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:28.31225314 +0000 UTC m=+96.809432197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.862406 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:27 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:27 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:27 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.862884 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:27 crc kubenswrapper[4707]: I1213 06:56:27.913246 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:27 crc kubenswrapper[4707]: E1213 06:56:27.913462 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:28.413430851 +0000 UTC m=+96.910609908 (durationBeforeRetry 500ms). 
[... the UnmountVolume.TearDown / MountVolume.MountDevice failure pair above repeats verbatim on each ~100 ms reconciler pass through 06:56:28.117, every retry rescheduled with durationBeforeRetry 500ms and the same cause: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers ...]
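The 500 ms figure in every nestedpendingoperations line is what keeps these retries from spinning: a failed operation records a no-retry-before deadline, and reconciler passes that arrive earlier are rejected outright. A self-contained sketch of that gating behaviour (names are hypothetical; the fixed 500 ms delay is taken from the excerpt):

```go
// Sketch of a durationBeforeRetry gate, not the kubelet's implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

type retryGate struct {
	noRetryUntil map[string]time.Time
	delay        time.Duration
}

var errTooSoon = errors.New("no retries permitted yet")

func (g *retryGate) run(op string, f func() error, now time.Time) error {
	// Reject attempts that arrive before the recorded deadline.
	if until, ok := g.noRetryUntil[op]; ok && now.Before(until) {
		return fmt.Errorf("%w (until %s)", errTooSoon, until.Format("15:04:05.000"))
	}
	if err := f(); err != nil {
		g.noRetryUntil[op] = now.Add(g.delay) // push the deadline out again
		return fmt.Errorf("operation failed, next retry after %s: %w", g.delay, err)
	}
	delete(g.noRetryUntil, op)
	return nil
}

func main() {
	gate := &retryGate{noRetryUntil: map[string]time.Time{}, delay: 500 * time.Millisecond}
	mount := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	t0 := time.Now()
	fmt.Println(gate.run("mount/pvc-657094db", mount, t0))                               // fails, schedules backoff
	fmt.Println(gate.run("mount/pvc-657094db", mount, t0.Add(100*time.Millisecond)))     // too early: rejected
	fmt.Println(gate.run("mount/pvc-657094db", mount, t0.Add(600*time.Millisecond)))     // past the deadline: retried, fails again
}
```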
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.153291 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f3b21338-6240-404f-af83-ffed1f5f3fe9","Type":"ContainerStarted","Data":"349c8162677183d852acfe5ce05921124984340fae1095b20a5757cfde77e426"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.154911 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" event={"ID":"93832597-9fae-41f4-8228-f3bf3e2ecd88","Type":"ContainerStarted","Data":"888c875c78933fc7da3cf92dad4bb1cc48e21411207bea8bde413a8f709ff984"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.156278 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x" event={"ID":"e5fb8205-458e-4e43-a541-9438509a3fc4","Type":"ContainerDied","Data":"120a36ef2aa10e9bb5139d2d7cfb93ee282679996b8ef089d9fadf2553cdb060"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.156289 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426805-cpw2x"
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.156316 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="120a36ef2aa10e9bb5139d2d7cfb93ee282679996b8ef089d9fadf2553cdb060"
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.159721 4707 generic.go:334] "Generic (PLEG): container finished" podID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerID="f983c7327702429525f69fc517a30df8cd4ca10d7bfd13c2ebe704a79e6dce9b" exitCode=0
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.159767 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerDied","Data":"f983c7327702429525f69fc517a30df8cd4ca10d7bfd13c2ebe704a79e6dce9b"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.161557 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f00213e0-4841-456c-8c78-2ab1692951fb","Type":"ContainerStarted","Data":"addbf9ada5ce1e48d355beb275344fcda4512ca0674e08b85318245a4f611dae"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.163577 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" event={"ID":"98c17acd-c234-4d92-855e-313cde4274cb","Type":"ContainerStarted","Data":"783eb79bf9f25ef942412e10abb69a6080521a4c991dd4f2d2c09bc2821b1e69"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.165936 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerStarted","Data":"ade6202a39e82517cb363379d8dbfcb3d1c516a124c15563df88972cdd3be9a2"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.167560 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerStarted","Data":"0171a22af15900257d8d8acb8390afd5657994da8d17605c213093ca218da1bd"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.168810 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerStarted","Data":"7d9025b141ae3f90bef335d2ea4bf47329339720401a51c0aa50d905e069c19e"}
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.183434 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ckpvk" podStartSLOduration=73.183416363 podStartE2EDuration="1m13.183416363s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:28.182015259 +0000 UTC m=+96.679194306" watchObservedRunningTime="2025-12-13 06:56:28.183416363 +0000 UTC m=+96.680595410"
[... identical UnmountVolume.TearDown / MountVolume.MountDevice failure cycles continue from 06:56:28.217 through 06:56:28.833, unmount and mount attempts alternating at ~100 ms intervals, each retry deferred another 500ms with the same unregistered-driver cause ...]
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.860648 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 06:56:28 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld
Dec 13 06:56:28 crc kubenswrapper[4707]: [+]process-running ok
Dec 13 06:56:28 crc kubenswrapper[4707]: healthz check failed
Dec 13 06:56:28 crc kubenswrapper[4707]: I1213 06:56:28.860692 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
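The router's startup-probe output follows the aggregated-healthz convention: one [+]/[-] line per named check, and an overall HTTP 500 as soon as any check fails, which is what prober.go then reports. A rough sketch of a handler producing output of this shape (check names copied from the log; the logic is a guess at the pattern, not the router's actual handler):

```go
// Sketch of an aggregated /healthz endpoint in the style seen above.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var b strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&b, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			b.WriteString("healthz check failed\n")
			w.WriteHeader(http.StatusInternalServerError) // the kubelet probe logs "statuscode: 500"
		}
		fmt.Fprint(w, b.String())
	}
}

func main() {
	notReady := func() error { return fmt.Errorf("not ready") }
	h := healthz([]check{
		{"backend-http", notReady}, // still failing in the log
		{"has-synced", notReady},   // still failing in the log
		{"process-running", func() error { return nil }}, // the one passing check
	})
	http.ListenAndServe(":8080", h) // sketch only; error ignored
}
```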
[... identical failure cycles continue from 06:56:28.934 through 06:56:29.140 ...]
Dec 13 06:56:29 crc kubenswrapper[4707]: I1213 06:56:29.174183 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" event={"ID":"836ef8b4-4ea3-44d4-a07a-724ea1940c40","Type":"ContainerStarted","Data":"77df0f04413d02baeb47b9464ee6547549ce2a0839f1f40ff5416728da993152"}
Dec 13 06:56:29 crc kubenswrapper[4707]: I1213 06:56:29.175794 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerStarted","Data":"65a1fbadf1f651b71f7a52a2f4de9cc6875e886a8912334e52a669372849b9d1"}
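The "SyncLoop (PLEG)" records here and above come from the Pod Lifecycle Event Generator relisting runtime state and diffing it against the previous snapshot; each container that appears or disappears becomes a ContainerStarted or ContainerDied event keyed by pod. An illustrative (not kubelet-accurate) relist diff:

```go
// Sketch of a PLEG-style relist diff; types and names are illustrative.
package main

import "fmt"

type plegEvent struct {
	pod  string
	kind string // "ContainerStarted" or "ContainerDied", as in the log's Type field
	data string // container ID, as in the log's Data field
}

func contains(ids []string, id string) bool {
	for _, x := range ids {
		if x == id {
			return true
		}
	}
	return false
}

// relist diffs two snapshots of per-pod container IDs and emits one event
// per change, loosely mirroring how PLEG turns runtime state into events.
func relist(prev, curr map[string][]string) []plegEvent {
	var events []plegEvent
	for pod, ids := range curr {
		for _, id := range ids {
			if !contains(prev[pod], id) {
				events = append(events, plegEvent{pod, "ContainerStarted", id})
			}
		}
	}
	for pod, ids := range prev {
		for _, id := range ids {
			if !contains(curr[pod], id) {
				events = append(events, plegEvent{pod, "ContainerDied", id})
			}
		}
	}
	return events
}

func main() {
	prev := map[string][]string{"openshift-marketplace/community-operators-gkw57": {"7d9025b1"}}
	curr := map[string][]string{"openshift-apiserver/apiserver-76f77b778f-w8kpl": {"77df0f04"}}
	for _, e := range relist(prev, curr) {
		fmt.Printf("SyncLoop (PLEG): event for pod %q: %s %s\n", e.pod, e.kind, e.data)
	}
}
```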
[... identical failure cycles continue from 06:56:29.239 through 06:56:29.851 ...]
Dec 13 06:56:29 crc kubenswrapper[4707]: I1213 06:56:29.861424 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 13 06:56:29 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld
Dec 13 06:56:29 crc kubenswrapper[4707]: [+]process-running ok
Dec 13 06:56:29 crc kubenswrapper[4707]: healthz check failed
Dec 13 06:56:29 crc kubenswrapper[4707]: I1213 06:56:29.861469 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... identical failure cycles continue from 06:56:29.950 through 06:56:30.155 ...]
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.181100 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerStarted","Data":"788c9ba865a026fc932837a0d7211cc5afe28cd3edf1944b883939b13321fe2f"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.182852 4707 generic.go:334] "Generic (PLEG): container finished" podID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerID="7d9025b141ae3f90bef335d2ea4bf47329339720401a51c0aa50d905e069c19e" exitCode=0
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.182941 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerDied","Data":"7d9025b141ae3f90bef335d2ea4bf47329339720401a51c0aa50d905e069c19e"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.184568 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerStarted","Data":"90004ab6611fcc2477b99239365cd3c417d9cb8efdac15b5eb7f204d5185639e"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.186085 4707 generic.go:334] "Generic (PLEG): container finished" podID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerID="ade6202a39e82517cb363379d8dbfcb3d1c516a124c15563df88972cdd3be9a2" exitCode=0
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.186164 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerDied","Data":"ade6202a39e82517cb363379d8dbfcb3d1c516a124c15563df88972cdd3be9a2"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.187475 4707 generic.go:334] "Generic (PLEG): container finished" podID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerID="0171a22af15900257d8d8acb8390afd5657994da8d17605c213093ca218da1bd" exitCode=0
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.187551 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerDied","Data":"0171a22af15900257d8d8acb8390afd5657994da8d17605c213093ca218da1bd"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.188701 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerStarted","Data":"69503d6e4404c76713bfef28150f565c6ce93264f6d99605eedbba3527d1dc92"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.191200 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" event={"ID":"04a68c01-2285-41f7-830f-896a194c343d","Type":"ContainerStarted","Data":"461e41de56ed89eb44e98e807c296c85cadbec1a2826e62fb07b3aa556a273b4"}
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.208714 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fp5b9" podStartSLOduration=75.20867828 podStartE2EDuration="1m15.20867828s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:29.198650499 +0000 UTC m=+97.695829556" watchObservedRunningTime="2025-12-13 06:56:30.20867828 +0000 UTC m=+98.705857377"
Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.212764 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjpzg" podStartSLOduration=75.212734826 podStartE2EDuration="1m15.212734826s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:30.205560735 +0000 UTC m=+98.702739782" watchObservedRunningTime="2025-12-13 06:56:30.212734826 +0000 UTC m=+98.709913923"
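In these pod_startup_latency_tracker lines (and the migrator line earlier), the printed podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was observed, which is presumably why the SLO and E2E durations match. A small check of that arithmetic (my reading of the logged fields, not the tracker's source):

```go
// Verifying the startup-duration arithmetic with values copied from the
// migrator-59844c95c7-ckpvk line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-12-13T06:55:15Z")
	watched, _ := time.Parse(time.RFC3339Nano, "2025-12-13T06:56:28.183416363Z")

	slo := watched.Sub(created)
	fmt.Println("podStartSLOduration =", slo) // 1m13.183416363s, matching the log

	// The zero pull timestamps ("0001-01-01 ...") mean no pull window was
	// observed; any observed pull time would presumably be excluded from
	// the SLO figure, separating SLO from E2E duration.
}
```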
[... identical failure cycles continue from 06:56:30.255 through 06:56:30.562, the last retry deferred to 06:56:31.061; kubevirt.io.hostpath-provisioner is still absent from the list of registered CSI drivers when the excerpt ends ...]
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.561776 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.562318 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.062301194 +0000 UTC m=+99.559480281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.662695 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.662981 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.162928451 +0000 UTC m=+99.660107508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.663041 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.663360 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.163345651 +0000 UTC m=+99.660524698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.764263 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.764469 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.26444533 +0000 UTC m=+99.761624367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.764752 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.765068 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.265061264 +0000 UTC m=+99.762240311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.861572 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:30 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:30 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:30 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.861622 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.865722 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.866017 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.365995529 +0000 UTC m=+99.863174596 (durationBeforeRetry 500ms). 
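
Annotation: the startup-probe failures for router-default-5444994796-d4qvp above show the kubelet's rendering of a Kubernetes healthz-style response body: each "[-]"/"[+]" line is a subcheck, and here backend-http and has-synced are failing while process-running passes, so the router process is up but has not yet synced its routes and backends; the probe keeps firing until all subchecks flip to "[+]". A hedged sketch of how such a probe reads the endpoint (URL, port, and path are assumptions for illustration, not taken from this log):

    import urllib.request
    import urllib.error

    URL = "http://localhost:1936/healthz/ready"  # endpoint assumed for illustration
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            code, body = resp.status, resp.read().decode()
    except urllib.error.HTTPError as e:  # a 500, as in the log above, arrives as HTTPError
        code, body = e.code, e.read().decode()
    print("status:", code)
    for line in body.splitlines():
        if line.startswith(("[+]", "[-]")):  # per-subcheck results, as echoed in the log
            print(line)
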
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:30 crc kubenswrapper[4707]: I1213 06:56:30.967370 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:30 crc kubenswrapper[4707]: E1213 06:56:30.967740 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.467722312 +0000 UTC m=+99.964901369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.068812 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.068907 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.568888622 +0000 UTC m=+100.066067669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.069060 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.069367 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.569359324 +0000 UTC m=+100.066538371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.170690 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.170866 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.670842781 +0000 UTC m=+100.168021838 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.170916 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.171243 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.67123235 +0000 UTC m=+100.168411397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.272545 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.272714 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.772689997 +0000 UTC m=+100.269869044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.272884 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.273204 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.77319741 +0000 UTC m=+100.270376457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.374245 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.374448 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.874411711 +0000 UTC m=+100.371590798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.374595 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.375023 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.875007715 +0000 UTC m=+100.372186802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.475968 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.476158 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.976136044 +0000 UTC m=+100.473315091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.476272 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.476586 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:31.976575955 +0000 UTC m=+100.473755002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.577043 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.577144 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.07712394 +0000 UTC m=+100.574302997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.577299 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.577631 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.077621392 +0000 UTC m=+100.574800439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.678602 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.678790 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.178763801 +0000 UTC m=+100.675942848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.678939 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.679297 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.179287254 +0000 UTC m=+100.676466381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.711764 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52v2z"] Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.712189 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" podUID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" containerName="controller-manager" containerID="cri-o://bcba363903b448249efa342e501a78fb96653edc1f7f19fa46cca0422d8ad102" gracePeriod=30 Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.759659 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9"] Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.759856 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" podUID="4b01406f-154d-4f81-8371-f998b19039cd" containerName="route-controller-manager" containerID="cri-o://385ed48d586b2582cd6d2af95e7aeb15a949a73d10569581b4c41ea56811eea8" gracePeriod=30 Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.780507 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.780835 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.280820493 +0000 UTC m=+100.777999540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.862005 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:31 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:31 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:31 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.862077 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.882036 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.882443 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.382427173 +0000 UTC m=+100.879606220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:31 crc kubenswrapper[4707]: I1213 06:56:31.982923 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:31 crc kubenswrapper[4707]: E1213 06:56:31.983196 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.483181273 +0000 UTC m=+100.980360310 (durationBeforeRetry 500ms). 
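
Annotation: the SyncLoop DELETE entries at 06:56:31.711 and 06:56:31.759 above show the kubelet reacting to API-side deletions of the controller-manager and route-controller-manager pods; "Killing container with a grace period ... gracePeriod=30" means the container's process gets SIGTERM immediately and SIGKILL only if it is still running 30 s later. A minimal local sketch of that sequence (illustrative only; the kubelet actually delegates this to CRI-O, and the sleep command stands in for a container's main process):

    import signal
    import subprocess

    proc = subprocess.Popen(["sleep", "300"])  # stand-in for the container's main process
    grace = 30                                 # gracePeriod=30, as in the log entries above
    proc.send_signal(signal.SIGTERM)           # polite shutdown request
    try:
        proc.wait(timeout=grace)               # allow the full grace period to exit cleanly
    except subprocess.TimeoutExpired:
        proc.kill()                            # SIGKILL once the grace period lapses
    print("exit code:", proc.wait())
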
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.084570 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.085173 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.585155783 +0000 UTC m=+101.082334830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.185206 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.185351 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.685331099 +0000 UTC m=+101.182510146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.185388 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.185699 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.685689458 +0000 UTC m=+101.182868505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.202113 4707 generic.go:334] "Generic (PLEG): container finished" podID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerID="65a1fbadf1f651b71f7a52a2f4de9cc6875e886a8912334e52a669372849b9d1" exitCode=0 Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.202195 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerDied","Data":"65a1fbadf1f651b71f7a52a2f4de9cc6875e886a8912334e52a669372849b9d1"} Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.203604 4707 generic.go:334] "Generic (PLEG): container finished" podID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerID="90004ab6611fcc2477b99239365cd3c417d9cb8efdac15b5eb7f204d5185639e" exitCode=0 Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.203669 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerDied","Data":"90004ab6611fcc2477b99239365cd3c417d9cb8efdac15b5eb7f204d5185639e"} Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.286592 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.286824 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
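
Annotation: the "Generic (PLEG): container finished ... exitCode=0" entries above (redhat-marketplace-75wd8, redhat-operators-5s7zh, and earlier certified-operators-rhlhc) are the Pod Lifecycle Event Generator noticing clean exits of short-lived containers in openshift-marketplace catalog pods, each followed by the matching SyncLoop ContainerDied event. A small hedged parser that pairs the two entry types from a saved journal (file name assumed; the regexes target the exact quoting seen in these lines and are a sketch, not the kubelet's own format contract):

    import re

    finished = re.compile(r'container finished" podID="([0-9a-f-]+)" containerID="([0-9a-f]+)" exitCode=(\d+)')
    died = re.compile(r'pod="(\S+)" event=.*"Type":"ContainerDied","Data":"([0-9a-f]+)"')

    exit_codes, events = {}, []
    with open("kubelet.log") as f:  # e.g. journalctl -u kubelet > kubelet.log (name assumed)
        for line in f:
            if m := finished.search(line):
                exit_codes[m.group(2)] = int(m.group(3))  # containerID -> exitCode
            elif m := died.search(line):
                events.append((m.group(1), m.group(2)))   # (pod, containerID)
    for pod, cid in events:
        print(pod, cid[:12], "exit", exit_codes.get(cid))
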
No retries permitted until 2025-12-13 06:56:32.786794366 +0000 UTC m=+101.283973413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.287332 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.287981 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.787917923 +0000 UTC m=+101.285097010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.388247 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.388401 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.888375836 +0000 UTC m=+101.385554883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.388747 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.389087 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.889078353 +0000 UTC m=+101.386257400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.489941 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.490081 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.990055959 +0000 UTC m=+101.487235006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.490264 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.490546 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:32.99053323 +0000 UTC m=+101.487712277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.592077 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.592428 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.092411657 +0000 UTC m=+101.589590704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.693193 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.693550 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.193531956 +0000 UTC m=+101.690711023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.794523 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.794846 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.294778638 +0000 UTC m=+101.791957735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.795023 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.795487 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.295470605 +0000 UTC m=+101.792649692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.863219 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:32 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:32 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:32 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.863623 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.895831 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.896065 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.39603997 +0000 UTC m=+101.893219027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:32 crc kubenswrapper[4707]: I1213 06:56:32.896547 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:32 crc kubenswrapper[4707]: E1213 06:56:32.896910 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.396899371 +0000 UTC m=+101.894078428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.000690 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.000993 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.500923289 +0000 UTC m=+101.998102406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.001725 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.002379 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.502354313 +0000 UTC m=+101.999533410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.104059 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.104352 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.604308951 +0000 UTC m=+102.101488038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.104473 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.105055 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.605030718 +0000 UTC m=+102.102209825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.207061 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.706988807 +0000 UTC m=+102.204167864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.207148 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.208298 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.208373 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.208638 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.708620886 +0000 UTC m=+102.205799933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.211337 4707 generic.go:334] "Generic (PLEG): container finished" podID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerID="69503d6e4404c76713bfef28150f565c6ce93264f6d99605eedbba3527d1dc92" exitCode=0 Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.211376 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerDied","Data":"69503d6e4404c76713bfef28150f565c6ce93264f6d99605eedbba3527d1dc92"} Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.216309 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbd5a4ca-21f2-414a-a4c4-441959130e09-metrics-certs\") pod \"network-metrics-daemon-xbx52\" (UID: \"cbd5a4ca-21f2-414a-a4c4-441959130e09\") " pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.309662 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.309988 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.809920559 +0000 UTC m=+102.307099636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.310074 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.310433 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.810417781 +0000 UTC m=+102.307596828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.411113 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.411322 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.911289024 +0000 UTC m=+102.408468091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.411447 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.411882 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:33.911868208 +0000 UTC m=+102.409047265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.435616 4707 patch_prober.go:28] interesting pod/console-f9d7485db-6c4h6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.435679 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6c4h6" podUID="c73c65c7-bef5-4db7-8e02-bbe2b39f1758" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.456270 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xbx52" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.512879 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.513111 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.013082229 +0000 UTC m=+102.510261296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.513498 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.513874 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.013860528 +0000 UTC m=+102.511039595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.580862 4707 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5vgr9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.581329 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" podUID="4b01406f-154d-4f81-8371-f998b19039cd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.614319 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.614469 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.114450094 +0000 UTC m=+102.611629141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.614566 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.615264 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.115252903 +0000 UTC m=+102.612431950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.628076 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dfxc7" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.682067 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xbx52"] Dec 13 06:56:33 crc kubenswrapper[4707]: W1213 06:56:33.690045 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd5a4ca_21f2_414a_a4c4_441959130e09.slice/crio-7b0eab868a34c7d2907b2df8eabcada4e90f898671351923128877730de6e3c0 WatchSource:0}: Error finding container 7b0eab868a34c7d2907b2df8eabcada4e90f898671351923128877730de6e3c0: Status 404 returned error can't find the container with id 7b0eab868a34c7d2907b2df8eabcada4e90f898671351923128877730de6e3c0 Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.714056 4707 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52v2z container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.714130 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" podUID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.717532 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.717764 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.217720684 +0000 UTC m=+102.714899741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.717848 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.718415 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.218404221 +0000 UTC m=+102.715583448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.818599 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.818782 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.318755921 +0000 UTC m=+102.815934968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.818832 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.819109 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.319101139 +0000 UTC m=+102.816280186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.861258 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:33 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:33 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:33 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.861314 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:33 crc kubenswrapper[4707]: I1213 06:56:33.919599 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:33 crc kubenswrapper[4707]: E1213 06:56:33.919973 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.419935782 +0000 UTC m=+102.917114829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.024411 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.024848 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.52482824 +0000 UTC m=+103.022007307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.125919 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.126132 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.626095193 +0000 UTC m=+103.123274290 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.126265 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.126828 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.62681119 +0000 UTC m=+103.123990277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.219919 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbx52" event={"ID":"cbd5a4ca-21f2-414a-a4c4-441959130e09","Type":"ContainerStarted","Data":"7b0eab868a34c7d2907b2df8eabcada4e90f898671351923128877730de6e3c0"} Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.220365 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.227902 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.228341 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.728306948 +0000 UTC m=+103.225486035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.228467 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.228929 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.728906102 +0000 UTC m=+103.226085189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.243107 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8" podStartSLOduration=79.24307574 podStartE2EDuration="1m19.24307574s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:34.236701098 +0000 UTC m=+102.733880185" watchObservedRunningTime="2025-12-13 06:56:34.24307574 +0000 UTC m=+102.740254877" Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.258742 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cjjxz" podStartSLOduration=79.258726353 podStartE2EDuration="1m19.258726353s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:34.25777974 +0000 UTC m=+102.754958797" watchObservedRunningTime="2025-12-13 06:56:34.258726353 +0000 UTC m=+102.755905390" Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.330387 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.330731 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.830711127 +0000 UTC m=+103.327890174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.332172 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.332567 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.832555781 +0000 UTC m=+103.329734828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.433538 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.433727 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.933702881 +0000 UTC m=+103.430881928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.433759 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.434100 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:34.93409219 +0000 UTC m=+103.431271237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.535546 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.535762 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.035730502 +0000 UTC m=+103.532909569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.535817 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.536343 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.036321766 +0000 UTC m=+103.533500843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.596048 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.598053 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.599266 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.599331 4707 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.637079 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.637709 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.13767618 +0000 UTC m=+103.634855267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.738520 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.738890 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.238874801 +0000 UTC m=+103.736053848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.839513 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.839663 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.339635172 +0000 UTC m=+103.836814259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.839729 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.840017 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.34000961 +0000 UTC m=+103.837188657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.861318 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:34 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:34 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:34 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.861384 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.940694 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.940807 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.440783521 +0000 UTC m=+103.937962568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:34 crc kubenswrapper[4707]: I1213 06:56:34.941169 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:34 crc kubenswrapper[4707]: E1213 06:56:34.941510 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.441501818 +0000 UTC m=+103.938680865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.042178 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.042347 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.54231708 +0000 UTC m=+104.039496127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.042470 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.042782 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.542769961 +0000 UTC m=+104.039949008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.143678 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.143843 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.643813528 +0000 UTC m=+104.140992605 (durationBeforeRetry 500ms). 
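
[Note] The 'Operation for "{volumeName:... podName:... nodeName:}" failed. No retries permitted until ...' lines come from kubelet's nested pending operations table, which serializes volume work and stamps each failed operation with an earliest-retry time (500ms ahead at this point in the log). Note the two key shapes above: the TearDown is keyed by volume plus pod UID, while MountDevice is device-scope and keyed by volume alone, which is why its key prints an empty podName. A rough sketch of that bookkeeping, assuming simplified types:

    package main

    import (
        "fmt"
        "time"
    )

    // operationKey mirrors the "{volumeName:... podName:... nodeName:}" keys
    // above: device-scope work (MountDevice) is keyed by volume alone,
    // pod-scope work (TearDown) by volume plus pod UID.
    type operationKey struct {
        volumeName string
        podName    string
        nodeName   string
    }

    type pendingOp struct {
        retryNotBefore time.Time
    }

    // markFailed sketches the bookkeeping behind the "No retries permitted
    // until ..." lines: a failed operation may not be retried before
    // now + durationBeforeRetry (500ms at this point in the log).
    func markFailed(table map[operationKey]*pendingOp, key operationKey, backoff time.Duration) {
        table[key] = &pendingOp{retryNotBefore: time.Now().Add(backoff)}
        fmt.Printf("Operation for %+v failed. No retries permitted until %s (durationBeforeRetry %s)\n",
            key, table[key].retryNotBefore.UTC().Format(time.RFC3339Nano), backoff)
    }

    func main() {
        table := map[operationKey]*pendingOp{}
        vol := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
        markFailed(table, operationKey{volumeName: vol, podName: "8f668bae-612b-4b75-9490-919e737c6a3b"}, 500*time.Millisecond)
        markFailed(table, operationKey{volumeName: vol}, 500*time.Millisecond) // device-scope: empty podName
    }

The reconciler keeps re-announcing "operationExecutor.UnmountVolume started" and "operationExecutor.MountVolume started" on every pass; the table is what turns those attempts into the fixed 500ms cadence visible in the timestamps.
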
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.144192 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.144642 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.644618877 +0000 UTC m=+104.141797964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.226687 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8e97114515e03aaf642d0c0dd02dd6d8e008402eb3bbe658ad0001f280058ab1"} Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.228308 4707 generic.go:334] "Generic (PLEG): container finished" podID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerID="788c9ba865a026fc932837a0d7211cc5afe28cd3edf1944b883939b13321fe2f" exitCode=0 Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.228384 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerDied","Data":"788c9ba865a026fc932837a0d7211cc5afe28cd3edf1944b883939b13321fe2f"} Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.229611 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"00d275dc2e61cb0d82ccb520059818a53fa00e9b14c620130aac942822278f86"} Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.230869 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.234028 4707 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.234342 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-kws2d" Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.244861 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.245405 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-kws2d" podStartSLOduration=34.245387448 podStartE2EDuration="34.245387448s" podCreationTimestamp="2025-12-13 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:35.244371124 +0000 UTC m=+103.741550181" watchObservedRunningTime="2025-12-13 06:56:35.245387448 +0000 UTC m=+103.742566495" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.246319 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.746283029 +0000 UTC m=+104.243462106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.323543 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wrg48" podStartSLOduration=83.323521169 podStartE2EDuration="1m23.323521169s" podCreationTimestamp="2025-12-13 06:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:35.312999218 +0000 UTC m=+103.810178275" watchObservedRunningTime="2025-12-13 06:56:35.323521169 +0000 UTC m=+103.820700216" Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.348048 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.348417 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.848402102 +0000 UTC m=+104.345581159 (durationBeforeRetry 500ms).
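
[Note] Amid the retries, the pod_startup_latency_tracker lines record startup SLO data. For openshift-dns/dns-default-kws2d both pull timestamps are the zero time (no image pull was needed), so the end-to-end duration is just the watch-observed running time minus the pod creation timestamp. The arithmetic checks out:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-12-13T06:56:01Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-12-13T06:56:35.245387448Z")
        // With no image pulls recorded, the E2E startup duration is simply
        // watchObservedRunningTime - podCreationTimestamp.
        fmt.Println(observed.Sub(created)) // 34.245387448s, matching podStartE2EDuration above
    }

The same arithmetic applies to machine-approver-56656f9798-wrg48 below it: created 06:55:12, observed running 06:56:35.32, hence the 1m23s figure.
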
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.448893 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.449085 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.94905945 +0000 UTC m=+104.446238497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.449310 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.449642 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:35.949630993 +0000 UTC m=+104.446810040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.550716 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.550848 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.050823444 +0000 UTC m=+104.548002501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.551123 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.551428 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.051417068 +0000 UTC m=+104.548596125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.651883 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.652138 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.152111157 +0000 UTC m=+104.649290204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.652222 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.652571 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.152559808 +0000 UTC m=+104.649738855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.753165 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.753348 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.253307188 +0000 UTC m=+104.750486275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.753487 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.754069 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.254048676 +0000 UTC m=+104.751227763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.854667 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.854849 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.354827886 +0000 UTC m=+104.852006943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.860825 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:35 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:35 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:35 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.860908 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:35 crc kubenswrapper[4707]: I1213 06:56:35.955995 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:35 crc kubenswrapper[4707]: E1213 06:56:35.956440 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.456419017 +0000 UTC m=+104.953598064 (durationBeforeRetry 500ms). 
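
[Note] The router probe output interleaved above has the classic aggregated-healthz shape: one [+] or [-] line per sub-check, failure reasons withheld at this verbosity, and an overall HTTP 500 whenever any sub-check fails, which is what the kubelet prober then reports. Here backend-http and has-synced are still failing while process-running passes. A self-contained handler in that style (a sketch, not the router's actual code):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
    )

    type check struct {
        name string
        run  func() error
    }

    // healthz aggregates named sub-checks the way the probe output above is
    // shaped: [+]/[-] per check, "reason withheld" for failures, and an
    // overall HTTP 500 if anything failed.
    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.run(); err != nil {
                    failed = true
                    body += "[-]" + c.name + " failed: reason withheld\n"
                } else {
                    body += "[+]" + c.name + " ok\n"
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            io.WriteString(w, body)
        }
    }

    func main() {
        srv := httptest.NewServer(healthz([]check{
            {"backend-http", func() error { return fmt.Errorf("no backends yet") }},
            {"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
            {"process-running", func() error { return nil }},
        }))
        defer srv.Close()
        resp, _ := http.Get(srv.URL)
        b, _ := io.ReadAll(resp.Body)
        fmt.Print(resp.StatusCode, "\n", string(b)) // 500 plus the [-]/[+] lines
    }
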
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.057320 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.057442 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.557419913 +0000 UTC m=+105.054598980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.057589 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.057880 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.557860413 +0000 UTC m=+105.055039470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.158280 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.158538 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.65847288 +0000 UTC m=+105.155651967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.245516 4707 generic.go:334] "Generic (PLEG): container finished" podID="4b01406f-154d-4f81-8371-f998b19039cd" containerID="385ed48d586b2582cd6d2af95e7aeb15a949a73d10569581b4c41ea56811eea8" exitCode=0 Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.245576 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" event={"ID":"4b01406f-154d-4f81-8371-f998b19039cd","Type":"ContainerDied","Data":"385ed48d586b2582cd6d2af95e7aeb15a949a73d10569581b4c41ea56811eea8"} Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.259552 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.259844 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.759832795 +0000 UTC m=+105.257011842 (durationBeforeRetry 500ms). 
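
[Note] The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" pairs come from the Pod Lifecycle Event Generator: kubelet observes container state changes in the runtime, turns them into lifecycle events, and the sync loop consumes them to re-sync the affected pod. The exitCode=0 above means the route-controller-manager container ended cleanly. A toy version of that producer/consumer pattern, with illustrative types rather than kubelet's:

    package main

    import "fmt"

    type PodLifecycleEvent struct {
        ID   string // pod UID
        Type string // "ContainerStarted" or "ContainerDied"
        Data string // container ID
    }

    func main() {
        // PLEG side: emit events as runtime state changes are observed.
        events := make(chan PodLifecycleEvent, 1)
        events <- PodLifecycleEvent{
            ID:   "4b01406f-154d-4f81-8371-f998b19039cd",
            Type: "ContainerDied",
            Data: "385ed48d586b2582cd6d2af95e7aeb15a949a73d10569581b4c41ea56811eea8",
        }
        close(events)

        // Sync-loop side: each event triggers a pod sync, as in the
        // "SyncLoop (PLEG): event for pod" lines above.
        for e := range events {
            fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.ID, e.Type, e.Data)
        }
    }
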
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.360892 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.361062 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.861031126 +0000 UTC m=+105.358210223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.361193 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.361621 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.861602859 +0000 UTC m=+105.358781946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.461845 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.462161 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.962125804 +0000 UTC m=+105.459304891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.462259 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.462568 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:36.962556094 +0000 UTC m=+105.459735141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.563920 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.564039 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.064017982 +0000 UTC m=+105.561197029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.564336 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.564648 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.064636446 +0000 UTC m=+105.561815503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.665138 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.665297 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.165267822 +0000 UTC m=+105.662446889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.665529 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.665819 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.165808605 +0000 UTC m=+105.662987652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.767078 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.768019 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.26799 +0000 UTC m=+105.765169057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.861542 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:36 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:36 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:36 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.861602 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.869253 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.870622 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.370607104 +0000 UTC m=+105.867786151 (durationBeforeRetry 500ms). 
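
[Note] On the kubelet side, prober.go drives the router's startupProbe on its configured period and keeps reporting failures until the endpoint returns success; readiness and liveness probing only take over once startup passes, and the container is restarted if failureThreshold is exhausted first. For reference, this is roughly how such a probe is expressed with the k8s.io/api types; the path, port, and thresholds below are assumptions for illustration, not the router's actual manifest:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Illustrative startupProbe: hit an HTTP health endpoint once a
        // second, allowing up to two minutes before the container is
        // considered failed and restarted.
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/healthz/ready",       // assumed path
                    Port: intstr.FromInt(1936),   // assumed port
                },
            },
            PeriodSeconds:    1,
            FailureThreshold: 120,
        }
        fmt.Printf("startupProbe: %+v\n", probe.ProbeHandler.HTTPGet)
    }
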
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:36 crc kubenswrapper[4707]: I1213 06:56:36.971688 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:36 crc kubenswrapper[4707]: E1213 06:56:36.972109 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.472089152 +0000 UTC m=+105.969268209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.072977 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.073426 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.573411826 +0000 UTC m=+106.070590883 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.174267 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.174525 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.674490754 +0000 UTC m=+106.171669841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.174598 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.174965 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.674939774 +0000 UTC m=+106.172118821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.275712 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.275828 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.775809237 +0000 UTC m=+106.272988294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.276078 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.276385 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.776374821 +0000 UTC m=+106.273553878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.377045 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.377210 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.877190303 +0000 UTC m=+106.374369350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.377317 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.377605 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.877597632 +0000 UTC m=+106.374776679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.479241 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.479475 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.979445059 +0000 UTC m=+106.476624106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.479856 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.480304 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:37.980293609 +0000 UTC m=+106.477472656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.581113 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.581316 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.081289425 +0000 UTC m=+106.578468472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.581599 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.582162 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.082153635 +0000 UTC m=+106.579332682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.682970 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.683113 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.18309617 +0000 UTC m=+106.680275207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.683155 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.683418 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.183408868 +0000 UTC m=+106.680587925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.784722 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.785016 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.284981377 +0000 UTC m=+106.782160454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.785758 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.786179 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.286166246 +0000 UTC m=+106.783345293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.863050 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:37 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:37 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:37 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.863134 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.887533 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.887877 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.387828367 +0000 UTC m=+106.885007464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.887983 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.888316 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.388300549 +0000 UTC m=+106.885479606 (durationBeforeRetry 500ms). 
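Interleaved with the volume retries, the router's startup probe keeps failing: the router serves an aggregated health endpoint where each named check reports "[+]name ok" or "[-]name failed", and any single failure turns the whole response into HTTP 500, which is what the kubelet prober records ("HTTP probe failed with statuscode: 500"). Below is a generic reconstruction of that aggregated-healthz pattern; the check names are taken from the log, but the handler is illustrative, not the router's actual code.

```go
// Generic aggregated healthz handler in the style of the probe output above:
// per-check [+]/[-] lines, HTTP 500 if any check fails.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // prober sees "statuscode: 500"
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("backend not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":8080", nil)
}
```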
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.988790 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.988902 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.488882105 +0000 UTC m=+106.986061162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:37 crc kubenswrapper[4707]: I1213 06:56:37.989040 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:37 crc kubenswrapper[4707]: E1213 06:56:37.989417 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.489400137 +0000 UTC m=+106.986579214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.090285 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.090447 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.590417114 +0000 UTC m=+107.087596161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.090497 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.091050 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.591040309 +0000 UTC m=+107.088219356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.191111 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.191280 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.691253416 +0000 UTC m=+107.188432463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.191435 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.191721 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.691713337 +0000 UTC m=+107.188892384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.292637 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.292807 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.792784265 +0000 UTC m=+107.289963332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.292848 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.293265 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.793249466 +0000 UTC m=+107.290428523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.393904 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.394229 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.89418385 +0000 UTC m=+107.391362897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.394383 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.394780 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.894762794 +0000 UTC m=+107.391941841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.462212 4707 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.496074 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.496097 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.996075158 +0000 UTC m=+107.493254215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.496396 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.496761 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:38.996739494 +0000 UTC m=+107.493918541 (durationBeforeRetry 500ms). 
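The plugin_watcher line above is the turning point: the CSI driver's node registrar has finally created its registration socket under /var/lib/kubelet/plugins_registry, and the kubelet's filesystem watcher picks it up and queues it in the desired-state cache ("Adding socket path or updating timestamp to desired state cache"). Here is a sketch of that discovery step using github.com/fsnotify/fsnotify as a stand-in for the kubelet's internal watcher; the paths come from the log, and error handling is trimmed.

```go
// Sketch of plugin-socket discovery: watch the registration directory for
// new *.sock files, as the kubelet's plugin watcher does. fsnotify is a
// stand-in for the kubelet's internal implementation.
package main

import (
	"log"
	"path/filepath"
	"strings"
	"time"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&fsnotify.Create != 0 && strings.HasSuffix(ev.Name, ".sock") {
			// Mirrors: "Adding socket path or updating timestamp to desired state cache"
			log.Printf("adding socket path %s to desired state cache (ts=%s)",
				filepath.Clean(ev.Name), time.Now().Format(time.RFC3339Nano))
		}
	}
}
```

After discovery, the kubelet dials the socket, calls the registration service's GetInfo, and validates the driver, which is what the csi_plugin.go "Trying to validate a new CSI Driver" / "Register new plugin" lines just below record.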
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.631767 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.632241 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:39.132209571 +0000 UTC m=+107.629388618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.733607 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.733997 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:39.233979715 +0000 UTC m=+107.731158762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.835013 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.835214 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:39.335193677 +0000 UTC m=+107.832372734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.835366 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.835657 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:39.335647887 +0000 UTC m=+107.832826934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.864143 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:38 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:38 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:38 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.864220 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.936182 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.936300 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-13 06:56:39.436273385 +0000 UTC m=+107.933452462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.936580 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:38 crc kubenswrapper[4707]: E1213 06:56:38.937023 4707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-13 06:56:39.437007692 +0000 UTC m=+107.934186759 (durationBeforeRetry 500ms). 
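Once the driver validates and registers (the csi_plugin.go lines below), the pending operations start succeeding, and MountDevice is skipped outright because the hostpath driver does not advertise the STAGE_UNSTAGE_VOLUME node capability ("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..."). The following is a sketch of that capability gate against the CSI node service, assuming a gRPC connection to the driver socket named in the log; it uses the container-storage-interface Go types but is a simplified illustration, not kubelet's attacher code.

```go
// Sketch: decide whether NodeStageVolume (MountDevice) is required by asking
// the node plugin for its capabilities, as the CSI attacher does.
package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func deviceMountRequired(ctx context.Context, conn *grpc.ClientConn) (bool, error) {
	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil // driver wants NodeStageVolume before publish
		}
	}
	return false, nil // hostpath case: skip MountDevice entirely
}

func main() {
	// Socket path taken from the log's registration lines.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	need, err := deviceMountRequired(context.Background(), conn)
	fmt.Println("stage/unstage required:", need, "err:", err)
}
```

For drivers without the capability the kubelet goes straight to NodePublishVolume, which is why the log jumps from "Skipping MountDevice..." to "MountVolume.SetUp succeeded" for the image-registry PVC.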
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-995mf" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.943918 4707 generic.go:334] "Generic (PLEG): container finished" podID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" containerID="bcba363903b448249efa342e501a78fb96653edc1f7f19fa46cca0422d8ad102" exitCode=0 Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.944019 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" event={"ID":"45efad7b-cca0-4262-8c64-74d2cbbbfe5b","Type":"ContainerDied","Data":"bcba363903b448249efa342e501a78fb96653edc1f7f19fa46cca0422d8ad102"} Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.945286 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15b2cd8a0cc02f344d596f8f1c9c0f43e964e969fdddbe1c0adcfc130bbd26cc"} Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.960343 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=26.960325718 podStartE2EDuration="26.960325718s" podCreationTimestamp="2025-12-13 06:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:38.956407134 +0000 UTC m=+107.453586181" watchObservedRunningTime="2025-12-13 06:56:38.960325718 +0000 UTC m=+107.457504765" Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.988250 4707 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-13T06:56:38.462242602Z","Handler":null,"Name":""} Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.992630 4707 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 13 06:56:38 crc kubenswrapper[4707]: I1213 06:56:38.992675 4707 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.037520 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.041439 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.138807 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.278568 4707 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.278640 4707 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.407218 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-995mf\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.493208 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.521634 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-557c4f9bc6-88rnx"] Dec 13 06:56:39 crc kubenswrapper[4707]: E1213 06:56:39.521882 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" containerName="controller-manager" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.521902 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" containerName="controller-manager" Dec 13 06:56:39 crc kubenswrapper[4707]: E1213 06:56:39.521925 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fb8205-458e-4e43-a541-9438509a3fc4" containerName="collect-profiles" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.521937 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fb8205-458e-4e43-a541-9438509a3fc4" containerName="collect-profiles" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.522133 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" containerName="controller-manager" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.522157 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fb8205-458e-4e43-a541-9438509a3fc4" containerName="collect-profiles" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.522615 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.536800 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557c4f9bc6-88rnx"] Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.567265 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646183 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-proxy-ca-bundles\") pod \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646236 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-client-ca\") pod \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646263 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-config\") pod \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646377 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-serving-cert\") pod \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646436 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbr2\" (UniqueName: \"kubernetes.io/projected/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-kube-api-access-fdbr2\") pod \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\" (UID: \"45efad7b-cca0-4262-8c64-74d2cbbbfe5b\") " Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646587 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-config\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646661 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-client-ca\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646713 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a747a4-d415-41e8-be84-6f8763deeb74-serving-cert\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646743 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-proxy-ca-bundles\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " 
pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646784 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zpx9\" (UniqueName: \"kubernetes.io/projected/b4a747a4-d415-41e8-be84-6f8763deeb74-kube-api-access-6zpx9\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646916 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-client-ca" (OuterVolumeSpecName: "client-ca") pod "45efad7b-cca0-4262-8c64-74d2cbbbfe5b" (UID: "45efad7b-cca0-4262-8c64-74d2cbbbfe5b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.646943 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "45efad7b-cca0-4262-8c64-74d2cbbbfe5b" (UID: "45efad7b-cca0-4262-8c64-74d2cbbbfe5b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.647508 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-config" (OuterVolumeSpecName: "config") pod "45efad7b-cca0-4262-8c64-74d2cbbbfe5b" (UID: "45efad7b-cca0-4262-8c64-74d2cbbbfe5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.651825 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45efad7b-cca0-4262-8c64-74d2cbbbfe5b" (UID: "45efad7b-cca0-4262-8c64-74d2cbbbfe5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.651841 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-kube-api-access-fdbr2" (OuterVolumeSpecName: "kube-api-access-fdbr2") pod "45efad7b-cca0-4262-8c64-74d2cbbbfe5b" (UID: "45efad7b-cca0-4262-8c64-74d2cbbbfe5b"). InnerVolumeSpecName "kube-api-access-fdbr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747496 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-client-ca\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747546 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a747a4-d415-41e8-be84-6f8763deeb74-serving-cert\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747564 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-proxy-ca-bundles\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747832 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zpx9\" (UniqueName: \"kubernetes.io/projected/b4a747a4-d415-41e8-be84-6f8763deeb74-kube-api-access-6zpx9\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747866 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-config\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747907 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbr2\" (UniqueName: \"kubernetes.io/projected/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-kube-api-access-fdbr2\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747933 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.747991 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.748006 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.748019 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45efad7b-cca0-4262-8c64-74d2cbbbfe5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.749693 4707 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-client-ca\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.749742 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-proxy-ca-bundles\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.750773 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-config\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.756671 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a747a4-d415-41e8-be84-6f8763deeb74-serving-cert\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.934066 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.934292 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:39 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:39 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:39 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.934346 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.943916 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.967036 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zpx9\" (UniqueName: \"kubernetes.io/projected/b4a747a4-d415-41e8-be84-6f8763deeb74-kube-api-access-6zpx9\") pod \"controller-manager-557c4f9bc6-88rnx\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.971556 4707 generic.go:334] "Generic (PLEG): container finished" podID="f00213e0-4841-456c-8c78-2ab1692951fb" containerID="addbf9ada5ce1e48d355beb275344fcda4512ca0674e08b85318245a4f611dae" exitCode=0 Dec 
13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.972344 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f00213e0-4841-456c-8c78-2ab1692951fb","Type":"ContainerDied","Data":"addbf9ada5ce1e48d355beb275344fcda4512ca0674e08b85318245a4f611dae"} Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.973470 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" event={"ID":"4b01406f-154d-4f81-8371-f998b19039cd","Type":"ContainerDied","Data":"43dcfe3f898badb157c2ffe6e269c2682efbcf575e31e30472f2ec8201fb5b2a"} Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.973503 4707 scope.go:117] "RemoveContainer" containerID="385ed48d586b2582cd6d2af95e7aeb15a949a73d10569581b4c41ea56811eea8" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.973597 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.976461 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" event={"ID":"45efad7b-cca0-4262-8c64-74d2cbbbfe5b","Type":"ContainerDied","Data":"7e644882bf2368c51fdaf9eb9d5dde22f36ef40ee1230c097d2c04cec7495435"} Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.976537 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52v2z" Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.977817 4707 generic.go:334] "Generic (PLEG): container finished" podID="f3b21338-6240-404f-af83-ffed1f5f3fe9" containerID="349c8162677183d852acfe5ce05921124984340fae1095b20a5757cfde77e426" exitCode=0 Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.977882 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f3b21338-6240-404f-af83-ffed1f5f3fe9","Type":"ContainerDied","Data":"349c8162677183d852acfe5ce05921124984340fae1095b20a5757cfde77e426"} Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.981946 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" event={"ID":"b383ce4b-acbd-4331-9244-27387ddc224a","Type":"ContainerStarted","Data":"20bbd4df2d4cdef877a34b958a3b9e3b882e49308487d4603e20f7c04bda1537"} Dec 13 06:56:39 crc kubenswrapper[4707]: I1213 06:56:39.999075 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbx52" event={"ID":"cbd5a4ca-21f2-414a-a4c4-441959130e09","Type":"ContainerStarted","Data":"abd16cccae6bf2a72b46621b6ccf1a7454cf8b2a48604799ce9481da8ce7b685"} Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.001915 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.021039 4707 scope.go:117] "RemoveContainer" containerID="bcba363903b448249efa342e501a78fb96653edc1f7f19fa46cca0422d8ad102" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.037884 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-config\") pod \"4b01406f-154d-4f81-8371-f998b19039cd\" (UID: 
\"4b01406f-154d-4f81-8371-f998b19039cd\") " Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.037918 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xl2h\" (UniqueName: \"kubernetes.io/projected/4b01406f-154d-4f81-8371-f998b19039cd-kube-api-access-5xl2h\") pod \"4b01406f-154d-4f81-8371-f998b19039cd\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.038020 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01406f-154d-4f81-8371-f998b19039cd-serving-cert\") pod \"4b01406f-154d-4f81-8371-f998b19039cd\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.038044 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-client-ca\") pod \"4b01406f-154d-4f81-8371-f998b19039cd\" (UID: \"4b01406f-154d-4f81-8371-f998b19039cd\") " Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.038775 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-client-ca" (OuterVolumeSpecName: "client-ca") pod "4b01406f-154d-4f81-8371-f998b19039cd" (UID: "4b01406f-154d-4f81-8371-f998b19039cd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.047249 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" podStartSLOduration=85.047227001 podStartE2EDuration="1m25.047227001s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:40.044519646 +0000 UTC m=+108.541698693" watchObservedRunningTime="2025-12-13 06:56:40.047227001 +0000 UTC m=+108.544406048" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.050280 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b01406f-154d-4f81-8371-f998b19039cd-kube-api-access-5xl2h" (OuterVolumeSpecName: "kube-api-access-5xl2h") pod "4b01406f-154d-4f81-8371-f998b19039cd" (UID: "4b01406f-154d-4f81-8371-f998b19039cd"). InnerVolumeSpecName "kube-api-access-5xl2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.058845 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01406f-154d-4f81-8371-f998b19039cd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4b01406f-154d-4f81-8371-f998b19039cd" (UID: "4b01406f-154d-4f81-8371-f998b19039cd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.059321 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-config" (OuterVolumeSpecName: "config") pod "4b01406f-154d-4f81-8371-f998b19039cd" (UID: "4b01406f-154d-4f81-8371-f998b19039cd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.149193 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01406f-154d-4f81-8371-f998b19039cd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.149236 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.149250 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b01406f-154d-4f81-8371-f998b19039cd-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.149265 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xl2h\" (UniqueName: \"kubernetes.io/projected/4b01406f-154d-4f81-8371-f998b19039cd-kube-api-access-5xl2h\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.183294 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.188319 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52v2z"] Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.198449 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52v2z"] Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.201177 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-995mf"] Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.218136 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" podStartSLOduration=86.218117021 podStartE2EDuration="1m26.218117021s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:40.21723645 +0000 UTC m=+108.714415497" watchObservedRunningTime="2025-12-13 06:56:40.218117021 +0000 UTC m=+108.715296078" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.277792 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-d2hk9" podStartSLOduration=86.277773342 podStartE2EDuration="1m26.277773342s" podCreationTimestamp="2025-12-13 06:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:40.27684206 +0000 UTC m=+108.774021107" watchObservedRunningTime="2025-12-13 06:56:40.277773342 +0000 UTC m=+108.774952389" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.317590 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hvcjw" podStartSLOduration=85.31757149 podStartE2EDuration="1m25.31757149s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 
06:56:40.313752719 +0000 UTC m=+108.810931766" watchObservedRunningTime="2025-12-13 06:56:40.31757149 +0000 UTC m=+108.814750537" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.344473 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbgmc" podStartSLOduration=85.344456951 podStartE2EDuration="1m25.344456951s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:40.344312957 +0000 UTC m=+108.841492004" watchObservedRunningTime="2025-12-13 06:56:40.344456951 +0000 UTC m=+108.841635998" Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.413817 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9"] Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.419275 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5vgr9"] Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.515808 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557c4f9bc6-88rnx"] Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.861926 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:40 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:40 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:40 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:40 crc kubenswrapper[4707]: I1213 06:56:40.862235 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.009767 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" event={"ID":"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e","Type":"ContainerStarted","Data":"a992605143d1d4000235aed84a993664fbedb1af2b927f8403e985bf38685f71"} Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.011408 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" event={"ID":"b4a747a4-d415-41e8-be84-6f8763deeb74","Type":"ContainerStarted","Data":"20ea8f70f785ba8fdc8c05c52d6c9bb61037ef7a6dd58426f2e79bb4750e393f"} Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.237007 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.244861 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.365840 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f00213e0-4841-456c-8c78-2ab1692951fb-kubelet-dir\") pod \"f00213e0-4841-456c-8c78-2ab1692951fb\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.365933 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3b21338-6240-404f-af83-ffed1f5f3fe9-kube-api-access\") pod \"f3b21338-6240-404f-af83-ffed1f5f3fe9\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.366025 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21338-6240-404f-af83-ffed1f5f3fe9-kubelet-dir\") pod \"f3b21338-6240-404f-af83-ffed1f5f3fe9\" (UID: \"f3b21338-6240-404f-af83-ffed1f5f3fe9\") " Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.366041 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00213e0-4841-456c-8c78-2ab1692951fb-kube-api-access\") pod \"f00213e0-4841-456c-8c78-2ab1692951fb\" (UID: \"f00213e0-4841-456c-8c78-2ab1692951fb\") " Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.366338 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f00213e0-4841-456c-8c78-2ab1692951fb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f00213e0-4841-456c-8c78-2ab1692951fb" (UID: "f00213e0-4841-456c-8c78-2ab1692951fb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.366364 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3b21338-6240-404f-af83-ffed1f5f3fe9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f3b21338-6240-404f-af83-ffed1f5f3fe9" (UID: "f3b21338-6240-404f-af83-ffed1f5f3fe9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.366488 4707 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f00213e0-4841-456c-8c78-2ab1692951fb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.366505 4707 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21338-6240-404f-af83-ffed1f5f3fe9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.372258 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b21338-6240-404f-af83-ffed1f5f3fe9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f3b21338-6240-404f-af83-ffed1f5f3fe9" (UID: "f3b21338-6240-404f-af83-ffed1f5f3fe9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.372410 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00213e0-4841-456c-8c78-2ab1692951fb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f00213e0-4841-456c-8c78-2ab1692951fb" (UID: "f00213e0-4841-456c-8c78-2ab1692951fb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.468634 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3b21338-6240-404f-af83-ffed1f5f3fe9-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.468664 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f00213e0-4841-456c-8c78-2ab1692951fb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.763327 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45efad7b-cca0-4262-8c64-74d2cbbbfe5b" path="/var/lib/kubelet/pods/45efad7b-cca0-4262-8c64-74d2cbbbfe5b/volumes" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.765186 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b01406f-154d-4f81-8371-f998b19039cd" path="/var/lib/kubelet/pods/4b01406f-154d-4f81-8371-f998b19039cd/volumes" Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.861232 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:41 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:41 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:41 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:41 crc kubenswrapper[4707]: I1213 06:56:41.861564 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.021808 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f00213e0-4841-456c-8c78-2ab1692951fb","Type":"ContainerDied","Data":"32aa1c6306cae22d4df71c56b56bbea367f9d630f3a627bc85c34487d20e0a91"} Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.021875 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32aa1c6306cae22d4df71c56b56bbea367f9d630f3a627bc85c34487d20e0a91" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.022002 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.024058 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f3b21338-6240-404f-af83-ffed1f5f3fe9","Type":"ContainerDied","Data":"3ba70fea9703d5696f8bcbb2eaef967705425e0e4a638157006dc65965a99887"} Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.024086 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ba70fea9703d5696f8bcbb2eaef967705425e0e4a638157006dc65965a99887" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.024117 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.478942 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv"] Dec 13 06:56:42 crc kubenswrapper[4707]: E1213 06:56:42.479421 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00213e0-4841-456c-8c78-2ab1692951fb" containerName="pruner" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.479533 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00213e0-4841-456c-8c78-2ab1692951fb" containerName="pruner" Dec 13 06:56:42 crc kubenswrapper[4707]: E1213 06:56:42.479607 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b01406f-154d-4f81-8371-f998b19039cd" containerName="route-controller-manager" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.479663 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b01406f-154d-4f81-8371-f998b19039cd" containerName="route-controller-manager" Dec 13 06:56:42 crc kubenswrapper[4707]: E1213 06:56:42.479720 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b21338-6240-404f-af83-ffed1f5f3fe9" containerName="pruner" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.479780 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b21338-6240-404f-af83-ffed1f5f3fe9" containerName="pruner" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.479933 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b21338-6240-404f-af83-ffed1f5f3fe9" containerName="pruner" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.480032 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b01406f-154d-4f81-8371-f998b19039cd" containerName="route-controller-manager" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.480100 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00213e0-4841-456c-8c78-2ab1692951fb" containerName="pruner" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.480518 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.482861 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.483087 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.483837 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.484508 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.485461 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.489017 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.494875 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv"] Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.589282 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv7f\" (UniqueName: \"kubernetes.io/projected/38b438ee-9579-46f0-95d4-8e29bb14faa4-kube-api-access-9mv7f\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.589315 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-client-ca\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.589367 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b438ee-9579-46f0-95d4-8e29bb14faa4-serving-cert\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.589621 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-config\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.691160 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-config\") pod 
\"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.691312 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv7f\" (UniqueName: \"kubernetes.io/projected/38b438ee-9579-46f0-95d4-8e29bb14faa4-kube-api-access-9mv7f\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.691358 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-client-ca\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.691440 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b438ee-9579-46f0-95d4-8e29bb14faa4-serving-cert\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.693001 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-client-ca\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.693121 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-config\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.700306 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b438ee-9579-46f0-95d4-8e29bb14faa4-serving-cert\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.713991 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mv7f\" (UniqueName: \"kubernetes.io/projected/38b438ee-9579-46f0-95d4-8e29bb14faa4-kube-api-access-9mv7f\") pod \"route-controller-manager-5c58458548-s6qzv\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.808392 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.861736 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:42 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:42 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:42 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:42 crc kubenswrapper[4707]: I1213 06:56:42.861813 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.030775 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv"] Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.436468 4707 patch_prober.go:28] interesting pod/console-f9d7485db-6c4h6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.436525 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6c4h6" podUID="c73c65c7-bef5-4db7-8e02-bbe2b39f1758" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.457466 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.457505 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.464157 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.478640 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.479038 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.507866 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 06:56:43.860971 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:43 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:43 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:43 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:43 crc kubenswrapper[4707]: I1213 
06:56:43.861020 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:44 crc kubenswrapper[4707]: I1213 06:56:44.043228 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" event={"ID":"38b438ee-9579-46f0-95d4-8e29bb14faa4","Type":"ContainerStarted","Data":"095386cbfd4c80b29a59101a60f56cb6794352459b5efca0293050273ce0c6ea"} Dec 13 06:56:44 crc kubenswrapper[4707]: I1213 06:56:44.049858 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-w8kpl" Dec 13 06:56:44 crc kubenswrapper[4707]: I1213 06:56:44.051027 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-82vdl" Dec 13 06:56:44 crc kubenswrapper[4707]: E1213 06:56:44.595319 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:44 crc kubenswrapper[4707]: E1213 06:56:44.596066 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:44 crc kubenswrapper[4707]: E1213 06:56:44.596502 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:44 crc kubenswrapper[4707]: E1213 06:56:44.596541 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:56:44 crc kubenswrapper[4707]: I1213 06:56:44.860967 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:44 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:44 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:44 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:44 crc kubenswrapper[4707]: I1213 06:56:44.861028 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" 
podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:45 crc kubenswrapper[4707]: I1213 06:56:45.061743 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" event={"ID":"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e","Type":"ContainerStarted","Data":"46be07b69cf10024389cc3ec5ea249a9d7283435d6f8404f8d800c3ac15a59b3"} Dec 13 06:56:45 crc kubenswrapper[4707]: I1213 06:56:45.862344 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:45 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:45 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:45 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:45 crc kubenswrapper[4707]: I1213 06:56:45.862731 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:46 crc kubenswrapper[4707]: I1213 06:56:46.085776 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wtlqx_278a952c-12a9-469b-960b-d68b5f2d2f29/kube-multus-additional-cni-plugins/0.log" Dec 13 06:56:46 crc kubenswrapper[4707]: I1213 06:56:46.086025 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" event={"ID":"278a952c-12a9-469b-960b-d68b5f2d2f29","Type":"ContainerDied","Data":"6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5"} Dec 13 06:56:46 crc kubenswrapper[4707]: I1213 06:56:46.085878 4707 generic.go:334] "Generic (PLEG): container finished" podID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" exitCode=137 Dec 13 06:56:46 crc kubenswrapper[4707]: I1213 06:56:46.860252 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:46 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:46 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:46 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:46 crc kubenswrapper[4707]: I1213 06:56:46.860322 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:47 crc kubenswrapper[4707]: I1213 06:56:47.863014 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:47 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:47 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:47 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:47 crc kubenswrapper[4707]: I1213 06:56:47.863089 4707 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:48 crc kubenswrapper[4707]: I1213 06:56:48.861458 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:48 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:48 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:48 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:48 crc kubenswrapper[4707]: I1213 06:56:48.862056 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:49 crc kubenswrapper[4707]: I1213 06:56:49.143098 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" event={"ID":"b4a747a4-d415-41e8-be84-6f8763deeb74","Type":"ContainerStarted","Data":"995b83c8eab9b8d4136d8f241cc2d2a5401c37a7d029c2fca717623e196aec80"} Dec 13 06:56:49 crc kubenswrapper[4707]: I1213 06:56:49.160046 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xbx52" event={"ID":"cbd5a4ca-21f2-414a-a4c4-441959130e09","Type":"ContainerStarted","Data":"f646c7d4a10ee2db0ce7658cd9c5c0ed2197679237b99c4e188daf5035af75cc"} Dec 13 06:56:49 crc kubenswrapper[4707]: I1213 06:56:49.862159 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:49 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:49 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:49 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:49 crc kubenswrapper[4707]: I1213 06:56:49.862222 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:50 crc kubenswrapper[4707]: I1213 06:56:50.376592 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557c4f9bc6-88rnx"] Dec 13 06:56:50 crc kubenswrapper[4707]: I1213 06:56:50.376641 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv"] Dec 13 06:56:50 crc kubenswrapper[4707]: I1213 06:56:50.860397 4707 patch_prober.go:28] interesting pod/router-default-5444994796-d4qvp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 13 06:56:50 crc kubenswrapper[4707]: [-]has-synced failed: reason withheld Dec 13 06:56:50 crc kubenswrapper[4707]: [+]process-running ok Dec 13 06:56:50 crc kubenswrapper[4707]: healthz check failed Dec 13 06:56:50 crc kubenswrapper[4707]: I1213 06:56:50.860479 4707 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d4qvp" podUID="6b1fa22e-132c-42d6-9327-f6d826d92373" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 13 06:56:51 crc kubenswrapper[4707]: I1213 06:56:51.862091 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:51 crc kubenswrapper[4707]: I1213 06:56:51.885167 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-d4qvp" Dec 13 06:56:52 crc kubenswrapper[4707]: I1213 06:56:52.208066 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" event={"ID":"b383ce4b-acbd-4331-9244-27387ddc224a","Type":"ContainerStarted","Data":"842191b888369e8e6d62b6f0c5ed90565d3a68d545af26eda8bb0eb24cb8e0c7"} Dec 13 06:56:52 crc kubenswrapper[4707]: I1213 06:56:52.218277 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" event={"ID":"38b438ee-9579-46f0-95d4-8e29bb14faa4","Type":"ContainerStarted","Data":"aec5a5a3dbb6a4a5ddccbf45bef477f6cd6e41cabdbe48e9df85b1e37965770a"} Dec 13 06:56:52 crc kubenswrapper[4707]: I1213 06:56:52.218694 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:56:52 crc kubenswrapper[4707]: I1213 06:56:52.239430 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" podStartSLOduration=97.239416 podStartE2EDuration="1m37.239416s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:52.235163108 +0000 UTC m=+120.732342165" watchObservedRunningTime="2025-12-13 06:56:52.239416 +0000 UTC m=+120.736595047" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.010311 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.011782 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.012779 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.015092 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.017160 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.076477 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87065476-d5a8-4630-883d-60d255622c72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.076614 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87065476-d5a8-4630-883d-60d255622c72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.177913 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87065476-d5a8-4630-883d-60d255622c72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.178032 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87065476-d5a8-4630-883d-60d255622c72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.178132 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87065476-d5a8-4630-883d-60d255622c72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.197070 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87065476-d5a8-4630-883d-60d255622c72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.228704 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" containerID="cri-o://995b83c8eab9b8d4136d8f241cc2d2a5401c37a7d029c2fca717623e196aec80" gracePeriod=30 Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.229047 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.237004 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.272793 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podStartSLOduration=22.272764087 podStartE2EDuration="22.272764087s" podCreationTimestamp="2025-12-13 06:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:53.268514976 +0000 UTC m=+121.765694023" watchObservedRunningTime="2025-12-13 06:56:53.272764087 +0000 UTC m=+121.769943134" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.273322 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xbx52" podStartSLOduration=98.27331696 podStartE2EDuration="1m38.27331696s" podCreationTimestamp="2025-12-13 06:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:56:53.249617846 +0000 UTC m=+121.746796893" watchObservedRunningTime="2025-12-13 06:56:53.27331696 +0000 UTC m=+121.770496007" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.348295 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.436164 4707 patch_prober.go:28] interesting pod/console-f9d7485db-6c4h6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 13 06:56:53 crc kubenswrapper[4707]: I1213 06:56:53.436214 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6c4h6" podUID="c73c65c7-bef5-4db7-8e02-bbe2b39f1758" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 13 06:56:54 crc kubenswrapper[4707]: E1213 06:56:54.594225 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:54 crc kubenswrapper[4707]: E1213 06:56:54.596717 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:54 crc kubenswrapper[4707]: E1213 06:56:54.597011 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container 
process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 13 06:56:54 crc kubenswrapper[4707]: E1213 06:56:54.597041 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.217902 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.218868 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.221462 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.352143 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.352197 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-var-lock\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.352321 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e6767a5-947b-454d-80b7-585d10aebb4f-kube-api-access\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.453771 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-var-lock\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.453839 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e6767a5-947b-454d-80b7-585d10aebb4f-kube-api-access\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.453911 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.454010 4707 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.454985 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-var-lock\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.475879 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e6767a5-947b-454d-80b7-585d10aebb4f-kube-api-access\") pod \"installer-9-crc\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:56:57 crc kubenswrapper[4707]: I1213 06:56:57.553775 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 06:57:00 crc kubenswrapper[4707]: I1213 06:57:00.045571 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 06:57:00 crc kubenswrapper[4707]: I1213 06:57:00.185280 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 13 06:57:00 crc kubenswrapper[4707]: I1213 06:57:00.185364 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 13 06:57:02 crc kubenswrapper[4707]: I1213 06:57:02.286333 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" containerID="cri-o://aec5a5a3dbb6a4a5ddccbf45bef477f6cd6e41cabdbe48e9df85b1e37965770a" gracePeriod=30 Dec 13 06:57:02 crc kubenswrapper[4707]: I1213 06:57:02.286533 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:57:02 crc kubenswrapper[4707]: I1213 06:57:02.292271 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:57:02 crc kubenswrapper[4707]: I1213 06:57:02.305049 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podStartSLOduration=31.305031999 podStartE2EDuration="31.305031999s" podCreationTimestamp="2025-12-13 06:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:57:02.304938067 +0000 UTC m=+130.802117114" watchObservedRunningTime="2025-12-13 06:57:02.305031999 
Dec 13 06:57:02 crc kubenswrapper[4707]: I1213 06:57:02.809613 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:57:02 crc kubenswrapper[4707]: I1213 06:57:02.809679 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:57:03 crc kubenswrapper[4707]: I1213 06:57:03.440843 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6c4h6"
Dec 13 06:57:03 crc kubenswrapper[4707]: I1213 06:57:03.455313 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6c4h6"
Dec 13 06:57:04 crc kubenswrapper[4707]: E1213 06:57:04.594933 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:04 crc kubenswrapper[4707]: E1213 06:57:04.596239 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:04 crc kubenswrapper[4707]: E1213 06:57:04.596618 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:04 crc kubenswrapper[4707]: E1213 06:57:04.596669 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:57:04 crc kubenswrapper[4707]: I1213 06:57:04.848848 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-d7vq8"
Dec 13 06:57:06 crc kubenswrapper[4707]: I1213 06:57:06.314071 4707 generic.go:334] "Generic (PLEG): container finished" podID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerID="995b83c8eab9b8d4136d8f241cc2d2a5401c37a7d029c2fca717623e196aec80" exitCode=0
Dec 13 06:57:06 crc kubenswrapper[4707]: I1213 06:57:06.314124 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" event={"ID":"b4a747a4-d415-41e8-be84-6f8763deeb74","Type":"ContainerDied","Data":"995b83c8eab9b8d4136d8f241cc2d2a5401c37a7d029c2fca717623e196aec80"}
Dec 13 06:57:10 crc kubenswrapper[4707]: I1213 06:57:10.185002 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:57:10 crc kubenswrapper[4707]: I1213 06:57:10.185358 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:57:11 crc kubenswrapper[4707]: I1213 06:57:11.633600 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 13 06:57:12 crc kubenswrapper[4707]: I1213 06:57:12.809898 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:57:12 crc kubenswrapper[4707]: I1213 06:57:12.810482 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:57:14 crc kubenswrapper[4707]: E1213 06:57:14.594082 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:14 crc kubenswrapper[4707]: E1213 06:57:14.594807 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:14 crc kubenswrapper[4707]: E1213 06:57:14.598020 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:14 crc kubenswrapper[4707]: E1213 06:57:14.598081 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:57:20 crc kubenswrapper[4707]: I1213 06:57:20.184866 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:57:20 crc kubenswrapper[4707]: I1213 06:57:20.185253 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:57:22 crc kubenswrapper[4707]: I1213 06:57:22.809646 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:57:22 crc kubenswrapper[4707]: I1213 06:57:22.809773 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:57:24 crc kubenswrapper[4707]: E1213 06:57:24.594296 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:24 crc kubenswrapper[4707]: E1213 06:57:24.595190 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:24 crc kubenswrapper[4707]: E1213 06:57:24.595621 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:24 crc kubenswrapper[4707]: E1213 06:57:24.595678 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
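The repeating ExecSync failures above are a readiness probe of the exec type: every period the kubelet asks CRI-O to run `test -f /ready/ready` inside the kube-multus-additional-cni-plugins container, and the call errors with NotFound because the container process is already gone. A minimal sketch of an equivalent exec probe follows; only the command is taken from the log, the period is an assumption based on the ~10 s retry cadence.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The probe the kubelet keeps retrying via ExecSync: it succeeds only
	// once the container has written the /ready/ready marker file into the
	// shared "ready" empty-dir volume.
	readiness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/bin/bash", "-c", "test -f /ready/ready"},
			},
		},
		PeriodSeconds: 10, // assumed from the ~10 s cadence in the log
	}
	fmt.Println(readiness.Exec.Command)
}
```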
Dec 13 06:57:30 crc kubenswrapper[4707]: I1213 06:57:30.185288 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:57:30 crc kubenswrapper[4707]: I1213 06:57:30.185690 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:57:32 crc kubenswrapper[4707]: I1213 06:57:32.810065 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:57:32 crc kubenswrapper[4707]: I1213 06:57:32.810160 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:57:33 crc kubenswrapper[4707]: I1213 06:57:33.576332 4707 generic.go:334] "Generic (PLEG): container finished" podID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerID="aec5a5a3dbb6a4a5ddccbf45bef477f6cd6e41cabdbe48e9df85b1e37965770a" exitCode=0
Dec 13 06:57:33 crc kubenswrapper[4707]: I1213 06:57:33.576386 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" event={"ID":"38b438ee-9579-46f0-95d4-8e29bb14faa4","Type":"ContainerDied","Data":"aec5a5a3dbb6a4a5ddccbf45bef477f6cd6e41cabdbe48e9df85b1e37965770a"}
Dec 13 06:57:34 crc kubenswrapper[4707]: E1213 06:57:34.594707 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:34 crc kubenswrapper[4707]: E1213 06:57:34.595293 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:34 crc kubenswrapper[4707]: E1213 06:57:34.595586 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:34 crc kubenswrapper[4707]: E1213 06:57:34.595622 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:57:40 crc kubenswrapper[4707]: I1213 06:57:40.185336 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:57:40 crc kubenswrapper[4707]: I1213 06:57:40.186024 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:57:42 crc kubenswrapper[4707]: I1213 06:57:42.810000 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:57:42 crc kubenswrapper[4707]: I1213 06:57:42.810361 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:57:44 crc kubenswrapper[4707]: I1213 06:57:44.476120 4707 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nlwlz container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 13 06:57:44 crc kubenswrapper[4707]: I1213 06:57:44.476200 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" podUID="5c02287e-e126-49cd-94a2-5e0596607990" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 06:57:44 crc kubenswrapper[4707]: E1213 06:57:44.594777 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:44 crc kubenswrapper[4707]: E1213 06:57:44.595366 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:44 crc kubenswrapper[4707]: E1213 06:57:44.595818 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:44 crc kubenswrapper[4707]: E1213 06:57:44.595858 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:57:50 crc kubenswrapper[4707]: I1213 06:57:50.186458 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:57:50 crc kubenswrapper[4707]: I1213 06:57:50.186804 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:57:52 crc kubenswrapper[4707]: I1213 06:57:52.810006 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:57:52 crc kubenswrapper[4707]: I1213 06:57:52.810066 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:57:53 crc kubenswrapper[4707]: I1213 06:57:53.348865 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 06:57:53 crc kubenswrapper[4707]: I1213 06:57:53.348984 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 06:57:54 crc kubenswrapper[4707]: E1213 06:57:54.594354 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:54 crc kubenswrapper[4707]: E1213 06:57:54.595077 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:54 crc kubenswrapper[4707]: E1213 06:57:54.595591 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:57:54 crc kubenswrapper[4707]: E1213 06:57:54.595632 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:58:00 crc kubenswrapper[4707]: I1213 06:58:00.185189 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:58:00 crc kubenswrapper[4707]: I1213 06:58:00.185515 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:58:02 crc kubenswrapper[4707]: I1213 06:58:02.809293 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:58:02 crc kubenswrapper[4707]: I1213 06:58:02.809404 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:58:04 crc kubenswrapper[4707]: E1213 06:58:04.594815 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:58:04 crc kubenswrapper[4707]: E1213 06:58:04.596664 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:58:04 crc kubenswrapper[4707]: E1213 06:58:04.597615 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:58:04 crc kubenswrapper[4707]: E1213 06:58:04.597702 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:58:10 crc kubenswrapper[4707]: I1213 06:58:10.185864 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Dec 13 06:58:10 crc kubenswrapper[4707]: I1213 06:58:10.186479 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Dec 13 06:58:12 crc kubenswrapper[4707]: I1213 06:58:12.809496 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Dec 13 06:58:12 crc kubenswrapper[4707]: I1213 06:58:12.809539 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Dec 13 06:58:14 crc kubenswrapper[4707]: E1213 06:58:14.594475 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:58:14 crc kubenswrapper[4707]: E1213 06:58:14.595039 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:58:14 crc kubenswrapper[4707]: E1213 06:58:14.595329 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 13 06:58:14 crc kubenswrapper[4707]: E1213 06:58:14.595366 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins"
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.370054 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wtlqx_278a952c-12a9-469b-960b-d68b5f2d2f29/kube-multus-additional-cni-plugins/0.log"
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.370220 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx"
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.569843 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/278a952c-12a9-469b-960b-d68b5f2d2f29-cni-sysctl-allowlist\") pod \"278a952c-12a9-469b-960b-d68b5f2d2f29\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") "
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.569921 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vfk\" (UniqueName: \"kubernetes.io/projected/278a952c-12a9-469b-960b-d68b5f2d2f29-kube-api-access-75vfk\") pod \"278a952c-12a9-469b-960b-d68b5f2d2f29\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") "
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.570015 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/278a952c-12a9-469b-960b-d68b5f2d2f29-ready\") pod \"278a952c-12a9-469b-960b-d68b5f2d2f29\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") "
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.570055 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/278a952c-12a9-469b-960b-d68b5f2d2f29-tuning-conf-dir\") pod \"278a952c-12a9-469b-960b-d68b5f2d2f29\" (UID: \"278a952c-12a9-469b-960b-d68b5f2d2f29\") "
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.570291 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/278a952c-12a9-469b-960b-d68b5f2d2f29-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "278a952c-12a9-469b-960b-d68b5f2d2f29" (UID: "278a952c-12a9-469b-960b-d68b5f2d2f29"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.570695 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278a952c-12a9-469b-960b-d68b5f2d2f29-ready" (OuterVolumeSpecName: "ready") pod "278a952c-12a9-469b-960b-d68b5f2d2f29" (UID: "278a952c-12a9-469b-960b-d68b5f2d2f29"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.570844 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278a952c-12a9-469b-960b-d68b5f2d2f29-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "278a952c-12a9-469b-960b-d68b5f2d2f29" (UID: "278a952c-12a9-469b-960b-d68b5f2d2f29"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.576897 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278a952c-12a9-469b-960b-d68b5f2d2f29-kube-api-access-75vfk" (OuterVolumeSpecName: "kube-api-access-75vfk") pod "278a952c-12a9-469b-960b-d68b5f2d2f29" (UID: "278a952c-12a9-469b-960b-d68b5f2d2f29"). InnerVolumeSpecName "kube-api-access-75vfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.671486 4707 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/278a952c-12a9-469b-960b-d68b5f2d2f29-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.671525 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vfk\" (UniqueName: \"kubernetes.io/projected/278a952c-12a9-469b-960b-d68b5f2d2f29-kube-api-access-75vfk\") on node \"crc\" DevicePath \"\""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.671536 4707 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/278a952c-12a9-469b-960b-d68b5f2d2f29-ready\") on node \"crc\" DevicePath \"\""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.671547 4707 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/278a952c-12a9-469b-960b-d68b5f2d2f29-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.823559 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-wtlqx_278a952c-12a9-469b-960b-d68b5f2d2f29/kube-multus-additional-cni-plugins/0.log"
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.823607 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" event={"ID":"278a952c-12a9-469b-960b-d68b5f2d2f29","Type":"ContainerDied","Data":"e3a0f0ac402076cba334dc1ac9b94b3346b66c066274e6d3195ad7160d8b1f23"}
Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.823646 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx"
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-wtlqx" Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.823662 4707 scope.go:117] "RemoveContainer" containerID="6a6b603350c14eda469202c29e689a6c739648d42bbf898fae8366ff9d9a7bb5" Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.842988 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wtlqx"] Dec 13 06:58:15 crc kubenswrapper[4707]: I1213 06:58:15.845870 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-wtlqx"] Dec 13 06:58:17 crc kubenswrapper[4707]: I1213 06:58:17.751606 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" path="/var/lib/kubelet/pods/278a952c-12a9-469b-960b-d68b5f2d2f29/volumes" Dec 13 06:58:20 crc kubenswrapper[4707]: I1213 06:58:20.185161 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 13 06:58:20 crc kubenswrapper[4707]: I1213 06:58:20.186036 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 13 06:58:22 crc kubenswrapper[4707]: I1213 06:58:22.809802 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Dec 13 06:58:22 crc kubenswrapper[4707]: I1213 06:58:22.810252 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Dec 13 06:58:23 crc kubenswrapper[4707]: I1213 06:58:23.348052 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 06:58:23 crc kubenswrapper[4707]: I1213 06:58:23.348125 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 06:58:30 crc kubenswrapper[4707]: I1213 06:58:30.185042 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 13 06:58:30 crc 
kubenswrapper[4707]: I1213 06:58:30.185393 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 13 06:58:30 crc kubenswrapper[4707]: I1213 06:58:30.185484 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:58:32 crc kubenswrapper[4707]: I1213 06:58:32.809898 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Dec 13 06:58:32 crc kubenswrapper[4707]: I1213 06:58:32.810047 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Dec 13 06:58:32 crc kubenswrapper[4707]: I1213 06:58:32.810185 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:58:40 crc kubenswrapper[4707]: I1213 06:58:40.186452 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 13 06:58:40 crc kubenswrapper[4707]: I1213 06:58:40.187060 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 13 06:58:42 crc kubenswrapper[4707]: I1213 06:58:42.809387 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Dec 13 06:58:42 crc kubenswrapper[4707]: I1213 06:58:42.809439 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Dec 13 06:58:51 crc kubenswrapper[4707]: I1213 06:58:51.184997 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Dec 13 06:58:51 crc kubenswrapper[4707]: I1213 06:58:51.185517 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 06:58:51 crc kubenswrapper[4707]: E1213 06:58:51.950271 4707 log.go:32] "RemovePodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="7e644882bf2368c51fdaf9eb9d5dde22f36ef40ee1230c097d2c04cec7495435" Dec 13 06:58:51 crc kubenswrapper[4707]: E1213 06:58:51.950379 4707 kuberuntime_gc.go:184] "Failed to remove sandbox" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" sandboxID="7e644882bf2368c51fdaf9eb9d5dde22f36ef40ee1230c097d2c04cec7495435" Dec 13 06:58:52 crc kubenswrapper[4707]: E1213 06:58:52.273903 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 13 06:58:52 crc kubenswrapper[4707]: E1213 06:58:52.274273 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn7c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2t9v8_openshift-marketplace(6f152c64-57be-4ac0-8422-d5d504f02cf8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:58:52 crc kubenswrapper[4707]: E1213 06:58:52.275457 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
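The "Unhandled Error" entry above dumps the failing init container as a printed Go struct (&Container{...}). Rendered back into source form with the real k8s.io/api/core/v1 types, the fields visible in the dump correspond roughly to the sketch below; everything not shown in the dump is left at its zero value, and this is a reconstruction for readability, not the operator's actual source.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	runAsUser := int64(1000170000)
	runAsNonRoot := true
	allowPrivilegeEscalation := false

	// The extract-content init container as dumped in the log: it copies the
	// catalog and cache out of the index image. The pull of that image is what
	// fails with ErrImagePull ("context canceled").
	extractContent := corev1.Container{
		Name:    "extract-content",
		Image:   "registry.redhat.io/redhat/redhat-operator-index:v4.18",
		Command: []string{"/utilities/copy-content"},
		Args: []string{
			"--catalog.from=/configs",
			"--catalog.to=/extracted-catalog/catalog",
			"--cache.from=/tmp/cache",
			"--cache.to=/extracted-catalog/cache",
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "utilities", MountPath: "/utilities"},
			{Name: "catalog-content", MountPath: "/extracted-catalog"},
			{Name: "kube-api-access-mn7c4", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
		ImagePullPolicy:          corev1.PullAlways,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			RunAsUser:                &runAsUser,
			RunAsNonRoot:             &runAsNonRoot,
			AllowPrivilegeEscalation: &allowPrivilegeEscalation,
		},
	}
	fmt.Println(extractContent.Name, extractContent.Image)
}
```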
pod="openshift-marketplace/redhat-operators-2t9v8" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" Dec 13 06:58:52 crc kubenswrapper[4707]: E1213 06:58:52.510181 4707 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Dec 13 06:58:52 crc kubenswrapper[4707]: E1213 06:58:52.510267 4707 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Dec 13 06:58:52 crc kubenswrapper[4707]: I1213 06:58:52.510326 4707 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.347820 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.347886 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.347938 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.348799 4707 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07bab14d70cf4aec6174e412c18e36ec4459ceec36c5b09bffa55fe19dcfc1a2"} pod="openshift-machine-config-operator/machine-config-daemon-ns45v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.348925 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" containerID="cri-o://07bab14d70cf4aec6174e412c18e36ec4459ceec36c5b09bffa55fe19dcfc1a2" gracePeriod=600 Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.809359 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 06:58:53 crc kubenswrapper[4707]: I1213 06:58:53.809683 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 06:58:55 crc kubenswrapper[4707]: I1213 06:58:55.099917 4707 generic.go:334] "Generic (PLEG): container finished" podID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" 
containerID="07bab14d70cf4aec6174e412c18e36ec4459ceec36c5b09bffa55fe19dcfc1a2" exitCode=0 Dec 13 06:58:55 crc kubenswrapper[4707]: I1213 06:58:55.099982 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerDied","Data":"07bab14d70cf4aec6174e412c18e36ec4459ceec36c5b09bffa55fe19dcfc1a2"} Dec 13 06:59:01 crc kubenswrapper[4707]: I1213 06:59:01.185695 4707 patch_prober.go:28] interesting pod/controller-manager-557c4f9bc6-88rnx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 06:59:01 crc kubenswrapper[4707]: I1213 06:59:01.186134 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 06:59:03 crc kubenswrapper[4707]: I1213 06:59:03.809231 4707 patch_prober.go:28] interesting pod/route-controller-manager-5c58458548-s6qzv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 13 06:59:03 crc kubenswrapper[4707]: I1213 06:59:03.809366 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.724281 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.724656 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmhrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-75wd8_openshift-marketplace(fb1a1a41-b86e-4de7-af57-7862986b3037): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.725217 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.727384 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-75wd8" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.729669 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.729779 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dxwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cwcjf_openshift-marketplace(90f36b9c-fa8d-4257-ad64-2ee8a487d037): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.730902 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cwcjf" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.731797 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.762487 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67c75588d5-4s7xs"] Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.771996 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.772015 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.772026 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.772047 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.772061 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.772067 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.772167 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" containerName="route-controller-manager" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.772179 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="278a952c-12a9-469b-960b-d68b5f2d2f29" containerName="kube-multus-additional-cni-plugins" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.772188 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" containerName="controller-manager" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.773253 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67c75588d5-4s7xs"] Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.773350 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.810916 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.811120 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blslw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5s7zh_openshift-marketplace(0291ce56-de87-4ca7-b7d1-8b4b68de5cbb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:06 crc kubenswrapper[4707]: E1213 06:59:06.813300 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5s7zh" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861365 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b438ee-9579-46f0-95d4-8e29bb14faa4-serving-cert\") pod \"38b438ee-9579-46f0-95d4-8e29bb14faa4\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861444 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a747a4-d415-41e8-be84-6f8763deeb74-serving-cert\") pod \"b4a747a4-d415-41e8-be84-6f8763deeb74\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861778 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-config\") pod \"38b438ee-9579-46f0-95d4-8e29bb14faa4\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861799 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-client-ca\") pod \"b4a747a4-d415-41e8-be84-6f8763deeb74\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861817 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-config\") pod \"b4a747a4-d415-41e8-be84-6f8763deeb74\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861839 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mv7f\" (UniqueName: \"kubernetes.io/projected/38b438ee-9579-46f0-95d4-8e29bb14faa4-kube-api-access-9mv7f\") pod \"38b438ee-9579-46f0-95d4-8e29bb14faa4\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861854 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-client-ca\") pod \"38b438ee-9579-46f0-95d4-8e29bb14faa4\" (UID: \"38b438ee-9579-46f0-95d4-8e29bb14faa4\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861874 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zpx9\" (UniqueName: \"kubernetes.io/projected/b4a747a4-d415-41e8-be84-6f8763deeb74-kube-api-access-6zpx9\") pod \"b4a747a4-d415-41e8-be84-6f8763deeb74\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.861893 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-proxy-ca-bundles\") pod \"b4a747a4-d415-41e8-be84-6f8763deeb74\" (UID: \"b4a747a4-d415-41e8-be84-6f8763deeb74\") " Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.862052 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-proxy-ca-bundles\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.862081 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f458a7da-8d17-4066-8aca-437fd48f9b7d-serving-cert\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.862119 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhzhz\" (UniqueName: \"kubernetes.io/projected/f458a7da-8d17-4066-8aca-437fd48f9b7d-kube-api-access-qhzhz\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: 
\"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.862154 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-client-ca\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.862177 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-config\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.863320 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b4a747a4-d415-41e8-be84-6f8763deeb74" (UID: "b4a747a4-d415-41e8-be84-6f8763deeb74"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.863717 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-config" (OuterVolumeSpecName: "config") pod "38b438ee-9579-46f0-95d4-8e29bb14faa4" (UID: "38b438ee-9579-46f0-95d4-8e29bb14faa4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.863778 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-client-ca" (OuterVolumeSpecName: "client-ca") pod "b4a747a4-d415-41e8-be84-6f8763deeb74" (UID: "b4a747a4-d415-41e8-be84-6f8763deeb74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.864069 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-client-ca" (OuterVolumeSpecName: "client-ca") pod "38b438ee-9579-46f0-95d4-8e29bb14faa4" (UID: "38b438ee-9579-46f0-95d4-8e29bb14faa4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.864637 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-config" (OuterVolumeSpecName: "config") pod "b4a747a4-d415-41e8-be84-6f8763deeb74" (UID: "b4a747a4-d415-41e8-be84-6f8763deeb74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.869843 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a747a4-d415-41e8-be84-6f8763deeb74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b4a747a4-d415-41e8-be84-6f8763deeb74" (UID: "b4a747a4-d415-41e8-be84-6f8763deeb74"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.872158 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b438ee-9579-46f0-95d4-8e29bb14faa4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38b438ee-9579-46f0-95d4-8e29bb14faa4" (UID: "38b438ee-9579-46f0-95d4-8e29bb14faa4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.873180 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b438ee-9579-46f0-95d4-8e29bb14faa4-kube-api-access-9mv7f" (OuterVolumeSpecName: "kube-api-access-9mv7f") pod "38b438ee-9579-46f0-95d4-8e29bb14faa4" (UID: "38b438ee-9579-46f0-95d4-8e29bb14faa4"). InnerVolumeSpecName "kube-api-access-9mv7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.874252 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a747a4-d415-41e8-be84-6f8763deeb74-kube-api-access-6zpx9" (OuterVolumeSpecName: "kube-api-access-6zpx9") pod "b4a747a4-d415-41e8-be84-6f8763deeb74" (UID: "b4a747a4-d415-41e8-be84-6f8763deeb74"). InnerVolumeSpecName "kube-api-access-6zpx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.903206 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962623 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-client-ca\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962664 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-config\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962692 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-proxy-ca-bundles\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962711 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f458a7da-8d17-4066-8aca-437fd48f9b7d-serving-cert\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962748 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhzhz\" (UniqueName: \"kubernetes.io/projected/f458a7da-8d17-4066-8aca-437fd48f9b7d-kube-api-access-qhzhz\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: 
\"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962782 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b438ee-9579-46f0-95d4-8e29bb14faa4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962792 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a747a4-d415-41e8-be84-6f8763deeb74-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962801 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962809 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962817 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962825 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mv7f\" (UniqueName: \"kubernetes.io/projected/38b438ee-9579-46f0-95d4-8e29bb14faa4-kube-api-access-9mv7f\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962833 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b438ee-9579-46f0-95d4-8e29bb14faa4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962841 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zpx9\" (UniqueName: \"kubernetes.io/projected/b4a747a4-d415-41e8-be84-6f8763deeb74-kube-api-access-6zpx9\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.962849 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4a747a4-d415-41e8-be84-6f8763deeb74-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.965582 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-config\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.966572 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-client-ca\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.966622 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-proxy-ca-bundles\") pod 
\"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.967892 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f458a7da-8d17-4066-8aca-437fd48f9b7d-serving-cert\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:06 crc kubenswrapper[4707]: I1213 06:59:06.977843 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhzhz\" (UniqueName: \"kubernetes.io/projected/f458a7da-8d17-4066-8aca-437fd48f9b7d-kube-api-access-qhzhz\") pod \"controller-manager-67c75588d5-4s7xs\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.100716 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.163075 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" event={"ID":"b4a747a4-d415-41e8-be84-6f8763deeb74","Type":"ContainerDied","Data":"20ea8f70f785ba8fdc8c05c52d6c9bb61037ef7a6dd58426f2e79bb4750e393f"} Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.163133 4707 scope.go:117] "RemoveContainer" containerID="995b83c8eab9b8d4136d8f241cc2d2a5401c37a7d029c2fca717623e196aec80" Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.163247 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557c4f9bc6-88rnx" Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.169546 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" event={"ID":"38b438ee-9579-46f0-95d4-8e29bb14faa4","Type":"ContainerDied","Data":"095386cbfd4c80b29a59101a60f56cb6794352459b5efca0293050273ce0c6ea"} Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.169778 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv" Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.178342 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.210282 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557c4f9bc6-88rnx"] Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.215451 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-557c4f9bc6-88rnx"] Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.222642 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv"] Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.227332 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c58458548-s6qzv"] Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.754540 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b438ee-9579-46f0-95d4-8e29bb14faa4" path="/var/lib/kubelet/pods/38b438ee-9579-46f0-95d4-8e29bb14faa4/volumes" Dec 13 06:59:07 crc kubenswrapper[4707]: I1213 06:59:07.755741 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a747a4-d415-41e8-be84-6f8763deeb74" path="/var/lib/kubelet/pods/b4a747a4-d415-41e8-be84-6f8763deeb74/volumes" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.592061 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh"] Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.593507 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.595625 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.595657 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.595660 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.597836 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.598093 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.598251 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.611876 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh"] Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.719840 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360de375-debc-486f-8e08-b56507e5b5b1-serving-cert\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.719893 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xhvn\" (UniqueName: \"kubernetes.io/projected/360de375-debc-486f-8e08-b56507e5b5b1-kube-api-access-5xhvn\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.719925 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-config\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.719999 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-client-ca\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.820896 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360de375-debc-486f-8e08-b56507e5b5b1-serving-cert\") pod 
\"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.820975 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xhvn\" (UniqueName: \"kubernetes.io/projected/360de375-debc-486f-8e08-b56507e5b5b1-kube-api-access-5xhvn\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.821194 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-config\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.821253 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-client-ca\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.822444 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-client-ca\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.826239 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360de375-debc-486f-8e08-b56507e5b5b1-serving-cert\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:11 crc kubenswrapper[4707]: I1213 06:59:11.845838 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xhvn\" (UniqueName: \"kubernetes.io/projected/360de375-debc-486f-8e08-b56507e5b5b1-kube-api-access-5xhvn\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:12 crc kubenswrapper[4707]: I1213 06:59:12.124400 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-config\") pod \"route-controller-manager-7668ff58cf-7czwh\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:12 crc kubenswrapper[4707]: I1213 06:59:12.212787 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:14 crc kubenswrapper[4707]: I1213 06:59:14.215328 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" event={"ID":"b383ce4b-acbd-4331-9244-27387ddc224a","Type":"ContainerStarted","Data":"c974e2d08a71a052fc5c471b9c33d236617819b56795ff61ebebb97b57be6188"} Dec 13 06:59:17 crc kubenswrapper[4707]: I1213 06:59:17.247713 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2j5l5" podStartSLOduration=196.24769098 podStartE2EDuration="3m16.24769098s" podCreationTimestamp="2025-12-13 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:17.245009563 +0000 UTC m=+265.742188640" watchObservedRunningTime="2025-12-13 06:59:17.24769098 +0000 UTC m=+265.744870037" Dec 13 06:59:19 crc kubenswrapper[4707]: E1213 06:59:19.431761 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 13 06:59:19 crc kubenswrapper[4707]: E1213 06:59:19.432213 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jc6r8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gkw57_openshift-marketplace(bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:19 crc kubenswrapper[4707]: E1213 06:59:19.433495 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/community-operators-gkw57" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" Dec 13 06:59:20 crc kubenswrapper[4707]: E1213 06:59:20.322884 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 13 06:59:20 crc kubenswrapper[4707]: E1213 06:59:20.323068 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qrmtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t685j_openshift-marketplace(31a88c0f-62fb-4209-8acf-b106c6d03d6e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:20 crc kubenswrapper[4707]: E1213 06:59:20.325153 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t685j" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.499752 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gkw57" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.499770 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t685j" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" Dec 13 06:59:26 crc kubenswrapper[4707]: 
I1213 06:59:26.534174 4707 scope.go:117] "RemoveContainer" containerID="aec5a5a3dbb6a4a5ddccbf45bef477f6cd6e41cabdbe48e9df85b1e37965770a" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.544428 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.544594 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-562nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rhlhc_openshift-marketplace(5a4673eb-3880-4df1-b8d3-95f31c7af7dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.548365 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rhlhc" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.573882 4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.574100 4707 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2cbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-z47zb_openshift-marketplace(1c15410b-e411-44d7-8663-d19dfc2a76b6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 13 06:59:26 crc kubenswrapper[4707]: E1213 06:59:26.575566 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-z47zb" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" Dec 13 06:59:26 crc kubenswrapper[4707]: I1213 06:59:26.895931 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67c75588d5-4s7xs"] Dec 13 06:59:26 crc kubenswrapper[4707]: I1213 06:59:26.970890 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh"] Dec 13 06:59:26 crc kubenswrapper[4707]: W1213 06:59:26.997800 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf458a7da_8d17_4066_8aca_437fd48f9b7d.slice/crio-9d9c4810d57aa1d810e826da026364fbecdd51316d87b6859b966009803c3c27 WatchSource:0}: Error finding container 9d9c4810d57aa1d810e826da026364fbecdd51316d87b6859b966009803c3c27: Status 404 returned error can't find the container with id 9d9c4810d57aa1d810e826da026364fbecdd51316d87b6859b966009803c3c27 Dec 13 06:59:26 crc kubenswrapper[4707]: W1213 06:59:26.998028 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod360de375_debc_486f_8e08_b56507e5b5b1.slice/crio-c04453240f1722eeddae070421eb46d8b472071def9c7b28abc07a18c17d990c WatchSource:0}: Error finding container c04453240f1722eeddae070421eb46d8b472071def9c7b28abc07a18c17d990c: Status 404 returned error can't find the container with id c04453240f1722eeddae070421eb46d8b472071def9c7b28abc07a18c17d990c Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.280588 4707 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" event={"ID":"360de375-debc-486f-8e08-b56507e5b5b1","Type":"ContainerStarted","Data":"c04453240f1722eeddae070421eb46d8b472071def9c7b28abc07a18c17d990c"} Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.287748 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" event={"ID":"f458a7da-8d17-4066-8aca-437fd48f9b7d","Type":"ContainerStarted","Data":"9d9c4810d57aa1d810e826da026364fbecdd51316d87b6859b966009803c3c27"} Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.292729 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"f95c4d54b8af173db296a2e5d0f55ada85a6b3bff3ad85c18367cf3d7c82b54d"} Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.299221 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4e6767a5-947b-454d-80b7-585d10aebb4f","Type":"ContainerStarted","Data":"6f3bee4dba2652058c10ccf7fa59ef8084effe4b550ab56a44a0747c9c146662"} Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.299289 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4e6767a5-947b-454d-80b7-585d10aebb4f","Type":"ContainerStarted","Data":"fe4394ce7b9c029ecda2e11e19c8f27321cc700f54a9cd2ed11a6301ddfe8649"} Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.303093 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87065476-d5a8-4630-883d-60d255622c72","Type":"ContainerStarted","Data":"2c40c1ea060f5e18e64dd0b816bcbe6ae346ee72ccfa037ee065b5cbf4f81a1d"} Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.303161 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87065476-d5a8-4630-883d-60d255622c72","Type":"ContainerStarted","Data":"f316709bf69576344f3456275be47cb3beedfde9a4b4f1a17cee7950f3972360"} Dec 13 06:59:27 crc kubenswrapper[4707]: E1213 06:59:27.312435 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-z47zb" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" Dec 13 06:59:27 crc kubenswrapper[4707]: E1213 06:59:27.335660 4707 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rhlhc" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.373854 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=150.373838429 podStartE2EDuration="2m30.373838429s" podCreationTimestamp="2025-12-13 06:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:27.34813573 +0000 UTC m=+275.845314777" watchObservedRunningTime="2025-12-13 06:59:27.373838429 +0000 UTC 
m=+275.871017476" Dec 13 06:59:27 crc kubenswrapper[4707]: I1213 06:59:27.418309 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=155.418286803 podStartE2EDuration="2m35.418286803s" podCreationTimestamp="2025-12-13 06:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:27.416106109 +0000 UTC m=+275.913285166" watchObservedRunningTime="2025-12-13 06:59:27.418286803 +0000 UTC m=+275.915465850" Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.317050 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerStarted","Data":"48f3cb2ac7ed0f66b4f8ef374fd6728ce0df723f61b61882489526a1ad704843"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.318908 4707 generic.go:334] "Generic (PLEG): container finished" podID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerID="2ce12c25962897782ca0c72f1262345b68d81b1020c28e278cb65f291e03d2d7" exitCode=0 Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.319048 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerDied","Data":"2ce12c25962897782ca0c72f1262345b68d81b1020c28e278cb65f291e03d2d7"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.321910 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" event={"ID":"360de375-debc-486f-8e08-b56507e5b5b1","Type":"ContainerStarted","Data":"13ace93c81122b4f486ad9e6f3c8d0e171ace46333b74d5431f31fa6450d8197"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.322241 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.324092 4707 generic.go:334] "Generic (PLEG): container finished" podID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerID="c4af05592af84e828bda5bb4f83e16446f08f45301ec7707067f764a4283e889" exitCode=0 Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.324137 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerDied","Data":"c4af05592af84e828bda5bb4f83e16446f08f45301ec7707067f764a4283e889"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.326159 4707 generic.go:334] "Generic (PLEG): container finished" podID="87065476-d5a8-4630-883d-60d255622c72" containerID="2c40c1ea060f5e18e64dd0b816bcbe6ae346ee72ccfa037ee065b5cbf4f81a1d" exitCode=0 Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.326202 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87065476-d5a8-4630-883d-60d255622c72","Type":"ContainerDied","Data":"2c40c1ea060f5e18e64dd0b816bcbe6ae346ee72ccfa037ee065b5cbf4f81a1d"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.333430 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" 
event={"ID":"f458a7da-8d17-4066-8aca-437fd48f9b7d","Type":"ContainerStarted","Data":"c208e17745223669dde7f5f4aa6f32ec0b26a0d73081c4f6f83f1ec134b85e72"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.335631 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerStarted","Data":"e6d26fcd666db1c56ff9afbdd09446510f088f76551c53b1268c5b903fc57fa3"} Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.337398 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:28 crc kubenswrapper[4707]: I1213 06:59:28.455341 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" podStartSLOduration=158.455316957 podStartE2EDuration="2m38.455316957s" podCreationTimestamp="2025-12-13 06:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:28.454527867 +0000 UTC m=+276.951706914" watchObservedRunningTime="2025-12-13 06:59:28.455316957 +0000 UTC m=+276.952496004" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.348158 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerStarted","Data":"8b5db6db4f63687ecc0a9299625a473847c08d3ac48294bf71473cfe83498482"} Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.353499 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerStarted","Data":"c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538"} Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.358042 4707 generic.go:334] "Generic (PLEG): container finished" podID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerID="e6d26fcd666db1c56ff9afbdd09446510f088f76551c53b1268c5b903fc57fa3" exitCode=0 Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.358217 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerDied","Data":"e6d26fcd666db1c56ff9afbdd09446510f088f76551c53b1268c5b903fc57fa3"} Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.370057 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwcjf" podStartSLOduration=24.368700154 podStartE2EDuration="3m13.370038283s" podCreationTimestamp="2025-12-13 06:56:16 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.021353264 +0000 UTC m=+108.518532321" lastFinishedPulling="2025-12-13 06:59:29.022691403 +0000 UTC m=+277.519870450" observedRunningTime="2025-12-13 06:59:29.368936565 +0000 UTC m=+277.866115612" watchObservedRunningTime="2025-12-13 06:59:29.370038283 +0000 UTC m=+277.867217330" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.370395 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" podStartSLOduration=159.370388702 podStartE2EDuration="2m39.370388702s" podCreationTimestamp="2025-12-13 06:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:28.518977048 +0000 UTC m=+277.016156095" watchObservedRunningTime="2025-12-13 06:59:29.370388702 +0000 UTC m=+277.867567749" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.373009 4707 generic.go:334] "Generic (PLEG): container finished" podID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerID="48f3cb2ac7ed0f66b4f8ef374fd6728ce0df723f61b61882489526a1ad704843" exitCode=0 Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.373144 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerDied","Data":"48f3cb2ac7ed0f66b4f8ef374fd6728ce0df723f61b61882489526a1ad704843"} Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.373919 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.381307 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.392688 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-75wd8" podStartSLOduration=24.327225564 podStartE2EDuration="3m13.392672065s" podCreationTimestamp="2025-12-13 06:56:16 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.020553495 +0000 UTC m=+108.517732542" lastFinishedPulling="2025-12-13 06:59:29.085999996 +0000 UTC m=+277.583179043" observedRunningTime="2025-12-13 06:59:29.390760348 +0000 UTC m=+277.887939395" watchObservedRunningTime="2025-12-13 06:59:29.392672065 +0000 UTC m=+277.889851112" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.812783 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.953357 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87065476-d5a8-4630-883d-60d255622c72-kubelet-dir\") pod \"87065476-d5a8-4630-883d-60d255622c72\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.953406 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87065476-d5a8-4630-883d-60d255622c72-kube-api-access\") pod \"87065476-d5a8-4630-883d-60d255622c72\" (UID: \"87065476-d5a8-4630-883d-60d255622c72\") " Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.953693 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87065476-d5a8-4630-883d-60d255622c72-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "87065476-d5a8-4630-883d-60d255622c72" (UID: "87065476-d5a8-4630-883d-60d255622c72"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.953888 4707 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87065476-d5a8-4630-883d-60d255622c72-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:29 crc kubenswrapper[4707]: I1213 06:59:29.961442 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87065476-d5a8-4630-883d-60d255622c72-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "87065476-d5a8-4630-883d-60d255622c72" (UID: "87065476-d5a8-4630-883d-60d255622c72"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.055420 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87065476-d5a8-4630-883d-60d255622c72-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.381178 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerStarted","Data":"6466583ddab57be8008355d729cc84e3f6eadb5ce3a334e2a038b13b0bff0740"} Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.384766 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerStarted","Data":"1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8"} Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.386353 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87065476-d5a8-4630-883d-60d255622c72","Type":"ContainerDied","Data":"f316709bf69576344f3456275be47cb3beedfde9a4b4f1a17cee7950f3972360"} Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.386411 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f316709bf69576344f3456275be47cb3beedfde9a4b4f1a17cee7950f3972360" Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.386485 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.407776 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5s7zh" podStartSLOduration=23.481748853 podStartE2EDuration="3m13.407756575s" podCreationTimestamp="2025-12-13 06:56:17 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.021258442 +0000 UTC m=+108.518437489" lastFinishedPulling="2025-12-13 06:59:29.947266154 +0000 UTC m=+278.444445211" observedRunningTime="2025-12-13 06:59:30.403065538 +0000 UTC m=+278.900244585" watchObservedRunningTime="2025-12-13 06:59:30.407756575 +0000 UTC m=+278.904935632" Dec 13 06:59:30 crc kubenswrapper[4707]: I1213 06:59:30.441553 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2t9v8" podStartSLOduration=23.419830745 podStartE2EDuration="3m13.441536744s" podCreationTimestamp="2025-12-13 06:56:17 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.020689788 +0000 UTC m=+108.517868835" lastFinishedPulling="2025-12-13 06:59:30.042395787 +0000 UTC m=+278.539574834" observedRunningTime="2025-12-13 06:59:30.439348119 +0000 UTC m=+278.936527196" watchObservedRunningTime="2025-12-13 06:59:30.441536744 +0000 UTC m=+278.938715791" Dec 13 06:59:36 crc kubenswrapper[4707]: I1213 06:59:36.612667 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:59:36 crc kubenswrapper[4707]: I1213 06:59:36.613256 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:59:36 crc kubenswrapper[4707]: I1213 06:59:36.825198 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:59:36 crc kubenswrapper[4707]: I1213 06:59:36.987695 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:59:36 crc kubenswrapper[4707]: I1213 06:59:36.987739 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:59:37 crc kubenswrapper[4707]: I1213 06:59:37.028686 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:59:37 crc kubenswrapper[4707]: I1213 06:59:37.471799 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 06:59:37 crc kubenswrapper[4707]: I1213 06:59:37.487170 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:59:37 crc kubenswrapper[4707]: I1213 06:59:37.580854 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:59:37 crc kubenswrapper[4707]: I1213 06:59:37.580921 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:59:37 crc kubenswrapper[4707]: I1213 06:59:37.626646 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:59:38 crc kubenswrapper[4707]: I1213 06:59:38.017907 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:59:38 crc kubenswrapper[4707]: I1213 06:59:38.017983 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:59:38 crc kubenswrapper[4707]: I1213 06:59:38.056943 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:59:38 crc kubenswrapper[4707]: I1213 06:59:38.464241 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwcjf"] Dec 13 06:59:38 crc kubenswrapper[4707]: I1213 06:59:38.472051 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:59:38 crc kubenswrapper[4707]: I1213 06:59:38.473393 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 06:59:39 crc kubenswrapper[4707]: I1213 06:59:39.439760 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwcjf" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="registry-server" containerID="cri-o://8b5db6db4f63687ecc0a9299625a473847c08d3ac48294bf71473cfe83498482" gracePeriod=2 Dec 13 06:59:39 crc kubenswrapper[4707]: I1213 06:59:39.861797 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5s7zh"] Dec 13 06:59:40 crc kubenswrapper[4707]: I1213 06:59:40.448356 4707 generic.go:334] "Generic (PLEG): container finished" podID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerID="8b5db6db4f63687ecc0a9299625a473847c08d3ac48294bf71473cfe83498482" exitCode=0 Dec 13 06:59:40 crc kubenswrapper[4707]: I1213 06:59:40.448441 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerDied","Data":"8b5db6db4f63687ecc0a9299625a473847c08d3ac48294bf71473cfe83498482"} Dec 13 06:59:40 crc kubenswrapper[4707]: I1213 06:59:40.448586 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5s7zh" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="registry-server" containerID="cri-o://6466583ddab57be8008355d729cc84e3f6eadb5ce3a334e2a038b13b0bff0740" gracePeriod=2 Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.435167 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.463188 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwcjf" event={"ID":"90f36b9c-fa8d-4257-ad64-2ee8a487d037","Type":"ContainerDied","Data":"97e2590a08b5c429793341df474df724462609f24c324a54345509a7d33c316b"} Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.463242 4707 scope.go:117] "RemoveContainer" containerID="8b5db6db4f63687ecc0a9299625a473847c08d3ac48294bf71473cfe83498482" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.463253 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwcjf" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.478896 4707 scope.go:117] "RemoveContainer" containerID="2ce12c25962897782ca0c72f1262345b68d81b1020c28e278cb65f291e03d2d7" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.494471 4707 scope.go:117] "RemoveContainer" containerID="788c9ba865a026fc932837a0d7211cc5afe28cd3edf1944b883939b13321fe2f" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.605804 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-utilities\") pod \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.605855 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dxwl\" (UniqueName: \"kubernetes.io/projected/90f36b9c-fa8d-4257-ad64-2ee8a487d037-kube-api-access-5dxwl\") pod \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.605873 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-catalog-content\") pod \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\" (UID: \"90f36b9c-fa8d-4257-ad64-2ee8a487d037\") " Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.606618 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-utilities" (OuterVolumeSpecName: "utilities") pod "90f36b9c-fa8d-4257-ad64-2ee8a487d037" (UID: "90f36b9c-fa8d-4257-ad64-2ee8a487d037"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.613838 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f36b9c-fa8d-4257-ad64-2ee8a487d037-kube-api-access-5dxwl" (OuterVolumeSpecName: "kube-api-access-5dxwl") pod "90f36b9c-fa8d-4257-ad64-2ee8a487d037" (UID: "90f36b9c-fa8d-4257-ad64-2ee8a487d037"). InnerVolumeSpecName "kube-api-access-5dxwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.628440 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90f36b9c-fa8d-4257-ad64-2ee8a487d037" (UID: "90f36b9c-fa8d-4257-ad64-2ee8a487d037"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.706972 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.707005 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dxwl\" (UniqueName: \"kubernetes.io/projected/90f36b9c-fa8d-4257-ad64-2ee8a487d037-kube-api-access-5dxwl\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.707016 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f36b9c-fa8d-4257-ad64-2ee8a487d037-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.804207 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwcjf"] Dec 13 06:59:41 crc kubenswrapper[4707]: I1213 06:59:41.809264 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwcjf"] Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.481597 4707 generic.go:334] "Generic (PLEG): container finished" podID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerID="a1d4d085b4ff9d8eb25bf5804607605b1d3480bcfb7bcb8a44af411f571a8a7d" exitCode=0 Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.481797 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerDied","Data":"a1d4d085b4ff9d8eb25bf5804607605b1d3480bcfb7bcb8a44af411f571a8a7d"} Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.490133 4707 generic.go:334] "Generic (PLEG): container finished" podID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerID="6466583ddab57be8008355d729cc84e3f6eadb5ce3a334e2a038b13b0bff0740" exitCode=0 Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.490245 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerDied","Data":"6466583ddab57be8008355d729cc84e3f6eadb5ce3a334e2a038b13b0bff0740"} Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.620584 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.782037 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blslw\" (UniqueName: \"kubernetes.io/projected/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-kube-api-access-blslw\") pod \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.782869 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-catalog-content\") pod \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.785206 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-utilities\") pod \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\" (UID: \"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb\") " Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.786658 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-utilities" (OuterVolumeSpecName: "utilities") pod "0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" (UID: "0291ce56-de87-4ca7-b7d1-8b4b68de5cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.786813 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-kube-api-access-blslw" (OuterVolumeSpecName: "kube-api-access-blslw") pod "0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" (UID: "0291ce56-de87-4ca7-b7d1-8b4b68de5cbb"). InnerVolumeSpecName "kube-api-access-blslw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.889179 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blslw\" (UniqueName: \"kubernetes.io/projected/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-kube-api-access-blslw\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:42 crc kubenswrapper[4707]: I1213 06:59:42.896078 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.496507 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5s7zh" event={"ID":"0291ce56-de87-4ca7-b7d1-8b4b68de5cbb","Type":"ContainerDied","Data":"dd312fd21d095757bb7d89e1f9ce95fa5d52accefdd525df71faf27dde63378b"} Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.496576 4707 scope.go:117] "RemoveContainer" containerID="6466583ddab57be8008355d729cc84e3f6eadb5ce3a334e2a038b13b0bff0740" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.497982 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5s7zh" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.512935 4707 scope.go:117] "RemoveContainer" containerID="e6d26fcd666db1c56ff9afbdd09446510f088f76551c53b1268c5b903fc57fa3" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.521644 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" (UID: "0291ce56-de87-4ca7-b7d1-8b4b68de5cbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.535543 4707 scope.go:117] "RemoveContainer" containerID="90004ab6611fcc2477b99239365cd3c417d9cb8efdac15b5eb7f204d5185639e" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.608322 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.751985 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" path="/var/lib/kubelet/pods/90f36b9c-fa8d-4257-ad64-2ee8a487d037/volumes" Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.816436 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5s7zh"] Dec 13 06:59:43 crc kubenswrapper[4707]: I1213 06:59:43.820912 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5s7zh"] Dec 13 06:59:45 crc kubenswrapper[4707]: I1213 06:59:45.750963 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" path="/var/lib/kubelet/pods/0291ce56-de87-4ca7-b7d1-8b4b68de5cbb/volumes" Dec 13 06:59:47 crc kubenswrapper[4707]: I1213 06:59:47.517446 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerStarted","Data":"714dff0439ad62d0b62818cc2a0920872ebc00889ffa109b6b8bd778cf5439e7"} Dec 13 06:59:47 crc kubenswrapper[4707]: I1213 06:59:47.521583 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerStarted","Data":"03d1ceb967184c7f2e3a37fc8de32e61f90402240531df73fb8b5525770ba09f"} Dec 13 06:59:47 crc kubenswrapper[4707]: I1213 06:59:47.522872 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerStarted","Data":"38e1dff808f8ab5ef00855b72d449ffa0ec109a306d3a31278cb3028f9312210"} Dec 13 06:59:47 crc kubenswrapper[4707]: I1213 06:59:47.525372 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerStarted","Data":"9822868977e799d309d5a03935630365c95ae8b2d7d3dcc70afd941b3bed2207"} Dec 13 06:59:47 crc kubenswrapper[4707]: I1213 06:59:47.595590 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gkw57" podStartSLOduration=27.598179686 podStartE2EDuration="3m34.595569356s" 
podCreationTimestamp="2025-12-13 06:56:13 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.020244858 +0000 UTC m=+108.517423905" lastFinishedPulling="2025-12-13 06:59:47.017634528 +0000 UTC m=+295.514813575" observedRunningTime="2025-12-13 06:59:47.592587022 +0000 UTC m=+296.089766069" watchObservedRunningTime="2025-12-13 06:59:47.595569356 +0000 UTC m=+296.092748403" Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.532680 4707 generic.go:334] "Generic (PLEG): container finished" podID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerID="714dff0439ad62d0b62818cc2a0920872ebc00889ffa109b6b8bd778cf5439e7" exitCode=0 Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.532746 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerDied","Data":"714dff0439ad62d0b62818cc2a0920872ebc00889ffa109b6b8bd778cf5439e7"} Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.534685 4707 generic.go:334] "Generic (PLEG): container finished" podID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerID="38e1dff808f8ab5ef00855b72d449ffa0ec109a306d3a31278cb3028f9312210" exitCode=0 Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.534736 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerDied","Data":"38e1dff808f8ab5ef00855b72d449ffa0ec109a306d3a31278cb3028f9312210"} Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.537462 4707 generic.go:334] "Generic (PLEG): container finished" podID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerID="03d1ceb967184c7f2e3a37fc8de32e61f90402240531df73fb8b5525770ba09f" exitCode=0 Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.537485 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerDied","Data":"03d1ceb967184c7f2e3a37fc8de32e61f90402240531df73fb8b5525770ba09f"} Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.682995 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67c75588d5-4s7xs"] Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.683255 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" podUID="f458a7da-8d17-4066-8aca-437fd48f9b7d" containerName="controller-manager" containerID="cri-o://c208e17745223669dde7f5f4aa6f32ec0b26a0d73081c4f6f83f1ec134b85e72" gracePeriod=30 Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.783989 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh"] Dec 13 06:59:48 crc kubenswrapper[4707]: I1213 06:59:48.784255 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" podUID="360de375-debc-486f-8e08-b56507e5b5b1" containerName="route-controller-manager" containerID="cri-o://13ace93c81122b4f486ad9e6f3c8d0e171ace46333b74d5431f31fa6450d8197" gracePeriod=30 Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.544042 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" 
event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerStarted","Data":"31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c"} Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.552860 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerStarted","Data":"dad91e21a401e5a21ec708b04a46557cf7d33536f77712c922196f2f04666c94"} Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.559066 4707 generic.go:334] "Generic (PLEG): container finished" podID="360de375-debc-486f-8e08-b56507e5b5b1" containerID="13ace93c81122b4f486ad9e6f3c8d0e171ace46333b74d5431f31fa6450d8197" exitCode=0 Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.559124 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" event={"ID":"360de375-debc-486f-8e08-b56507e5b5b1","Type":"ContainerDied","Data":"13ace93c81122b4f486ad9e6f3c8d0e171ace46333b74d5431f31fa6450d8197"} Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.563469 4707 generic.go:334] "Generic (PLEG): container finished" podID="f458a7da-8d17-4066-8aca-437fd48f9b7d" containerID="c208e17745223669dde7f5f4aa6f32ec0b26a0d73081c4f6f83f1ec134b85e72" exitCode=0 Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.563519 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" event={"ID":"f458a7da-8d17-4066-8aca-437fd48f9b7d","Type":"ContainerDied","Data":"c208e17745223669dde7f5f4aa6f32ec0b26a0d73081c4f6f83f1ec134b85e72"} Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.575141 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z47zb" podStartSLOduration=26.394549756 podStartE2EDuration="3m35.575125328s" podCreationTimestamp="2025-12-13 06:56:14 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.021663672 +0000 UTC m=+108.518842719" lastFinishedPulling="2025-12-13 06:59:49.202239244 +0000 UTC m=+297.699418291" observedRunningTime="2025-12-13 06:59:49.573318343 +0000 UTC m=+298.070497400" watchObservedRunningTime="2025-12-13 06:59:49.575125328 +0000 UTC m=+298.072304385" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.575702 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerStarted","Data":"c582b2bbb37344f19bfd1c6dcc9fe5aa1eb2afc6e591d38a9e88e47bb36b3924"} Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.603081 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhlhc" podStartSLOduration=26.569208818 podStartE2EDuration="3m35.603062922s" podCreationTimestamp="2025-12-13 06:56:14 +0000 UTC" firstStartedPulling="2025-12-13 06:56:40.018163478 +0000 UTC m=+108.515342525" lastFinishedPulling="2025-12-13 06:59:49.052017582 +0000 UTC m=+297.549196629" observedRunningTime="2025-12-13 06:59:49.601744199 +0000 UTC m=+298.098923246" watchObservedRunningTime="2025-12-13 06:59:49.603062922 +0000 UTC m=+298.100241979" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.623614 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t685j" podStartSLOduration=21.725726343 podStartE2EDuration="3m35.623595972s" 
podCreationTimestamp="2025-12-13 06:56:14 +0000 UTC" firstStartedPulling="2025-12-13 06:56:35.233280469 +0000 UTC m=+103.730459526" lastFinishedPulling="2025-12-13 06:59:49.131150108 +0000 UTC m=+297.628329155" observedRunningTime="2025-12-13 06:59:49.620486755 +0000 UTC m=+298.117665802" watchObservedRunningTime="2025-12-13 06:59:49.623595972 +0000 UTC m=+298.120775019" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.755759 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.882101 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-config\") pod \"360de375-debc-486f-8e08-b56507e5b5b1\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.882178 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360de375-debc-486f-8e08-b56507e5b5b1-serving-cert\") pod \"360de375-debc-486f-8e08-b56507e5b5b1\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.882211 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xhvn\" (UniqueName: \"kubernetes.io/projected/360de375-debc-486f-8e08-b56507e5b5b1-kube-api-access-5xhvn\") pod \"360de375-debc-486f-8e08-b56507e5b5b1\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.882245 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-client-ca\") pod \"360de375-debc-486f-8e08-b56507e5b5b1\" (UID: \"360de375-debc-486f-8e08-b56507e5b5b1\") " Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.883265 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "360de375-debc-486f-8e08-b56507e5b5b1" (UID: "360de375-debc-486f-8e08-b56507e5b5b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.883686 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-config" (OuterVolumeSpecName: "config") pod "360de375-debc-486f-8e08-b56507e5b5b1" (UID: "360de375-debc-486f-8e08-b56507e5b5b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.888656 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360de375-debc-486f-8e08-b56507e5b5b1-kube-api-access-5xhvn" (OuterVolumeSpecName: "kube-api-access-5xhvn") pod "360de375-debc-486f-8e08-b56507e5b5b1" (UID: "360de375-debc-486f-8e08-b56507e5b5b1"). InnerVolumeSpecName "kube-api-access-5xhvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.888850 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360de375-debc-486f-8e08-b56507e5b5b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "360de375-debc-486f-8e08-b56507e5b5b1" (UID: "360de375-debc-486f-8e08-b56507e5b5b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.983675 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360de375-debc-486f-8e08-b56507e5b5b1-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.984047 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xhvn\" (UniqueName: \"kubernetes.io/projected/360de375-debc-486f-8e08-b56507e5b5b1-kube-api-access-5xhvn\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.984066 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.984079 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360de375-debc-486f-8e08-b56507e5b5b1-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:49 crc kubenswrapper[4707]: I1213 06:59:49.997327 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.185401 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-proxy-ca-bundles\") pod \"f458a7da-8d17-4066-8aca-437fd48f9b7d\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.185455 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f458a7da-8d17-4066-8aca-437fd48f9b7d-serving-cert\") pod \"f458a7da-8d17-4066-8aca-437fd48f9b7d\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.185489 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhzhz\" (UniqueName: \"kubernetes.io/projected/f458a7da-8d17-4066-8aca-437fd48f9b7d-kube-api-access-qhzhz\") pod \"f458a7da-8d17-4066-8aca-437fd48f9b7d\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.185524 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-client-ca\") pod \"f458a7da-8d17-4066-8aca-437fd48f9b7d\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.185557 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-config\") pod \"f458a7da-8d17-4066-8aca-437fd48f9b7d\" (UID: \"f458a7da-8d17-4066-8aca-437fd48f9b7d\") " Dec 13 06:59:50 crc kubenswrapper[4707]: 
I1213 06:59:50.186233 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f458a7da-8d17-4066-8aca-437fd48f9b7d" (UID: "f458a7da-8d17-4066-8aca-437fd48f9b7d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.186474 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-config" (OuterVolumeSpecName: "config") pod "f458a7da-8d17-4066-8aca-437fd48f9b7d" (UID: "f458a7da-8d17-4066-8aca-437fd48f9b7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.186512 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-client-ca" (OuterVolumeSpecName: "client-ca") pod "f458a7da-8d17-4066-8aca-437fd48f9b7d" (UID: "f458a7da-8d17-4066-8aca-437fd48f9b7d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.192118 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f458a7da-8d17-4066-8aca-437fd48f9b7d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f458a7da-8d17-4066-8aca-437fd48f9b7d" (UID: "f458a7da-8d17-4066-8aca-437fd48f9b7d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.192196 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f458a7da-8d17-4066-8aca-437fd48f9b7d-kube-api-access-qhzhz" (OuterVolumeSpecName: "kube-api-access-qhzhz") pod "f458a7da-8d17-4066-8aca-437fd48f9b7d" (UID: "f458a7da-8d17-4066-8aca-437fd48f9b7d"). InnerVolumeSpecName "kube-api-access-qhzhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.287391 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.287430 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f458a7da-8d17-4066-8aca-437fd48f9b7d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.287443 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhzhz\" (UniqueName: \"kubernetes.io/projected/f458a7da-8d17-4066-8aca-437fd48f9b7d-kube-api-access-qhzhz\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.287457 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.287470 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f458a7da-8d17-4066-8aca-437fd48f9b7d-config\") on node \"crc\" DevicePath \"\"" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.582087 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" event={"ID":"f458a7da-8d17-4066-8aca-437fd48f9b7d","Type":"ContainerDied","Data":"9d9c4810d57aa1d810e826da026364fbecdd51316d87b6859b966009803c3c27"} Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.582138 4707 scope.go:117] "RemoveContainer" containerID="c208e17745223669dde7f5f4aa6f32ec0b26a0d73081c4f6f83f1ec134b85e72" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.582278 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67c75588d5-4s7xs" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.591249 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" event={"ID":"360de375-debc-486f-8e08-b56507e5b5b1","Type":"ContainerDied","Data":"c04453240f1722eeddae070421eb46d8b472071def9c7b28abc07a18c17d990c"} Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.591345 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.625050 4707 scope.go:117] "RemoveContainer" containerID="13ace93c81122b4f486ad9e6f3c8d0e171ace46333b74d5431f31fa6450d8197" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.639469 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67c75588d5-4s7xs"] Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.655550 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67c75588d5-4s7xs"] Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690024 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7"] Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690323 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360de375-debc-486f-8e08-b56507e5b5b1" containerName="route-controller-manager" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690347 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="360de375-debc-486f-8e08-b56507e5b5b1" containerName="route-controller-manager" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690362 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="registry-server" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690370 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="registry-server" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690384 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="registry-server" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690393 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="registry-server" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690402 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="extract-utilities" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690411 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="extract-utilities" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690424 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="extract-content" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690431 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="extract-content" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690443 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="extract-content" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690450 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="extract-content" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690461 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f458a7da-8d17-4066-8aca-437fd48f9b7d" containerName="controller-manager" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690469 4707 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f458a7da-8d17-4066-8aca-437fd48f9b7d" containerName="controller-manager" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690480 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87065476-d5a8-4630-883d-60d255622c72" containerName="pruner" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690488 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="87065476-d5a8-4630-883d-60d255622c72" containerName="pruner" Dec 13 06:59:50 crc kubenswrapper[4707]: E1213 06:59:50.690501 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="extract-utilities" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690509 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="extract-utilities" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690621 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f458a7da-8d17-4066-8aca-437fd48f9b7d" containerName="controller-manager" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690641 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f36b9c-fa8d-4257-ad64-2ee8a487d037" containerName="registry-server" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690650 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="0291ce56-de87-4ca7-b7d1-8b4b68de5cbb" containerName="registry-server" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690660 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="360de375-debc-486f-8e08-b56507e5b5b1" containerName="route-controller-manager" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.690670 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="87065476-d5a8-4630-883d-60d255622c72" containerName="pruner" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.691160 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.691250 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7"] Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.693341 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-serving-cert\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.693390 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jrfx\" (UniqueName: \"kubernetes.io/projected/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-kube-api-access-7jrfx\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.693430 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-config\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.693453 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-client-ca\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.695575 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.695832 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.700241 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh"] Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.703560 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7668ff58cf-7czwh"] Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.705412 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.705638 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.705821 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.706105 4707 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.800489 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-serving-cert\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.800595 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jrfx\" (UniqueName: \"kubernetes.io/projected/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-kube-api-access-7jrfx\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.800628 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-config\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.800644 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-client-ca\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.801647 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-client-ca\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.802016 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-config\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.807135 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-serving-cert\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:50 crc kubenswrapper[4707]: I1213 06:59:50.821068 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jrfx\" (UniqueName: \"kubernetes.io/projected/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-kube-api-access-7jrfx\") pod \"route-controller-manager-575f8f4845-zsdj7\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " 
pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:51 crc kubenswrapper[4707]: I1213 06:59:51.012359 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:51 crc kubenswrapper[4707]: I1213 06:59:51.490885 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7"] Dec 13 06:59:51 crc kubenswrapper[4707]: W1213 06:59:51.498560 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28b006c3_ebd2_4b65_a4fe_588bfc060fbc.slice/crio-6f864902f039cd7621a8f7ea72c6777fda76078d75e41fe7a6bbcc544320b493 WatchSource:0}: Error finding container 6f864902f039cd7621a8f7ea72c6777fda76078d75e41fe7a6bbcc544320b493: Status 404 returned error can't find the container with id 6f864902f039cd7621a8f7ea72c6777fda76078d75e41fe7a6bbcc544320b493 Dec 13 06:59:51 crc kubenswrapper[4707]: I1213 06:59:51.603775 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" event={"ID":"28b006c3-ebd2-4b65-a4fe-588bfc060fbc","Type":"ContainerStarted","Data":"6f864902f039cd7621a8f7ea72c6777fda76078d75e41fe7a6bbcc544320b493"} Dec 13 06:59:51 crc kubenswrapper[4707]: I1213 06:59:51.622126 4707 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 13 06:59:51 crc kubenswrapper[4707]: I1213 06:59:51.755679 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360de375-debc-486f-8e08-b56507e5b5b1" path="/var/lib/kubelet/pods/360de375-debc-486f-8e08-b56507e5b5b1/volumes" Dec 13 06:59:51 crc kubenswrapper[4707]: I1213 06:59:51.756361 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f458a7da-8d17-4066-8aca-437fd48f9b7d" path="/var/lib/kubelet/pods/f458a7da-8d17-4066-8aca-437fd48f9b7d/volumes" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.610242 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" event={"ID":"28b006c3-ebd2-4b65-a4fe-588bfc060fbc","Type":"ContainerStarted","Data":"8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917"} Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.610581 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.625584 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.627469 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b4c9f74-mlng8"] Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.630195 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.633764 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.634265 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.634542 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.635094 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.635287 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.643878 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.645581 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" podStartSLOduration=4.645557451 podStartE2EDuration="4.645557451s" podCreationTimestamp="2025-12-13 06:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:52.631799319 +0000 UTC m=+301.128978396" watchObservedRunningTime="2025-12-13 06:59:52.645557451 +0000 UTC m=+301.142736498" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.646895 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.650771 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b4c9f74-mlng8"] Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.827760 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-proxy-ca-bundles\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.827807 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w55p9\" (UniqueName: \"kubernetes.io/projected/d48505e5-0a2d-4f70-b105-7c180700351c-kube-api-access-w55p9\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.828009 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-config\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 
06:59:52.828069 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48505e5-0a2d-4f70-b105-7c180700351c-serving-cert\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.828097 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-client-ca\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.929004 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-proxy-ca-bundles\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.929437 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w55p9\" (UniqueName: \"kubernetes.io/projected/d48505e5-0a2d-4f70-b105-7c180700351c-kube-api-access-w55p9\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.929558 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-config\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.929593 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48505e5-0a2d-4f70-b105-7c180700351c-serving-cert\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.929626 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-client-ca\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.931098 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-proxy-ca-bundles\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.931276 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-config\") pod 
\"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.931297 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-client-ca\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.945236 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48505e5-0a2d-4f70-b105-7c180700351c-serving-cert\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.955072 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w55p9\" (UniqueName: \"kubernetes.io/projected/d48505e5-0a2d-4f70-b105-7c180700351c-kube-api-access-w55p9\") pod \"controller-manager-57b4c9f74-mlng8\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:52 crc kubenswrapper[4707]: I1213 06:59:52.965620 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:53 crc kubenswrapper[4707]: I1213 06:59:53.379393 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b4c9f74-mlng8"] Dec 13 06:59:53 crc kubenswrapper[4707]: I1213 06:59:53.615933 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" event={"ID":"d48505e5-0a2d-4f70-b105-7c180700351c","Type":"ContainerStarted","Data":"d5ff24a46d80ae71a587c7fe1140866ba78a027bf59034505a516b971c45fafb"} Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.351675 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gkw57" Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.351888 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gkw57" Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.401629 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gkw57" Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.620029 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t685j" Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.620096 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t685j" Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.657927 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t685j" Dec 13 06:59:54 crc kubenswrapper[4707]: I1213 06:59:54.669977 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gkw57" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.027320 4707 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.027785 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.071756 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.128319 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.128639 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.185348 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.628920 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" event={"ID":"d48505e5-0a2d-4f70-b105-7c180700351c","Type":"ContainerStarted","Data":"cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0"} Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.650751 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" podStartSLOduration=7.650731363 podStartE2EDuration="7.650731363s" podCreationTimestamp="2025-12-13 06:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 06:59:55.649870281 +0000 UTC m=+304.147049328" watchObservedRunningTime="2025-12-13 06:59:55.650731363 +0000 UTC m=+304.147910410" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.672649 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.673978 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 06:59:55 crc kubenswrapper[4707]: I1213 06:59:55.685598 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t685j" Dec 13 06:59:56 crc kubenswrapper[4707]: I1213 06:59:56.638070 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:56 crc kubenswrapper[4707]: I1213 06:59:56.644378 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 06:59:57 crc kubenswrapper[4707]: I1213 06:59:57.458762 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhlhc"] Dec 13 06:59:58 crc kubenswrapper[4707]: I1213 06:59:58.057038 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t685j"] Dec 13 06:59:58 crc kubenswrapper[4707]: I1213 06:59:58.057553 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t685j" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" 
containerName="registry-server" containerID="cri-o://c582b2bbb37344f19bfd1c6dcc9fe5aa1eb2afc6e591d38a9e88e47bb36b3924" gracePeriod=2
Dec 13 06:59:58 crc kubenswrapper[4707]: I1213 06:59:58.648253 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhlhc" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="registry-server" containerID="cri-o://dad91e21a401e5a21ec708b04a46557cf7d33536f77712c922196f2f04666c94" gracePeriod=2
Dec 13 06:59:59 crc kubenswrapper[4707]: I1213 06:59:59.659742 4707 generic.go:334] "Generic (PLEG): container finished" podID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerID="c582b2bbb37344f19bfd1c6dcc9fe5aa1eb2afc6e591d38a9e88e47bb36b3924" exitCode=0
Dec 13 06:59:59 crc kubenswrapper[4707]: I1213 06:59:59.659787 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerDied","Data":"c582b2bbb37344f19bfd1c6dcc9fe5aa1eb2afc6e591d38a9e88e47bb36b3924"}
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.208103 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"]
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.209436 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.220087 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.220310 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.234027 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"]
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.250994 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7852a74f-5273-47f4-8606-82c79b0a2179-secret-volume\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.251032 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz4fn\" (UniqueName: \"kubernetes.io/projected/7852a74f-5273-47f4-8606-82c79b0a2179-kube-api-access-pz4fn\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.251052 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7852a74f-5273-47f4-8606-82c79b0a2179-config-volume\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.352452 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7852a74f-5273-47f4-8606-82c79b0a2179-secret-volume\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
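The reconciler records around this point walk the volumes of collect-profiles-29426820-5npdd through VerifyControllerAttachedVolume, MountVolume, and SetUp: one secret volume, one configmap volume, and a projected kube-api-access-pz4fn service-account token volume. A minimal sketch, using k8s.io/api/core/v1 types, of how such volumes are declared; the volume names come from the log, but the source-object names and token lifetime below are placeholders, since the actual manifest is not in the log.

```go
// Illustrative only: declares the three volume kinds seen in the reconciler
// records (secret, configmap, projected service-account token). Not the real
// collect-profiles manifest; object names here are invented placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tokenExpiry := int64(3600) // assumed token lifetime for the projected volume

	volumes := []corev1.Volume{
		// Backed by a Secret object; the kubelet's operation executor runs
		// MountVolume.SetUp for it exactly as logged above.
		{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "example-profile-secret"},
		}},
		// Backed by a ConfigMap object.
		{Name: "config-volume", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "example-profile-config"},
			},
		}},
		// kube-api-access-* volumes are projected volumes; the service-account
		// token projection is the part that makes them "kubernetes.io/projected".
		{Name: "kube-api-access-pz4fn", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &tokenExpiry,
					}},
				},
			},
		}},
	}
	for _, v := range volumes {
		fmt.Println("declared volume:", v.Name)
	}
}
```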
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.352516 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz4fn\" (UniqueName: \"kubernetes.io/projected/7852a74f-5273-47f4-8606-82c79b0a2179-kube-api-access-pz4fn\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.352537 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7852a74f-5273-47f4-8606-82c79b0a2179-config-volume\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.353748 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7852a74f-5273-47f4-8606-82c79b0a2179-config-volume\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.358700 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7852a74f-5273-47f4-8606-82c79b0a2179-secret-volume\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.373282 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz4fn\" (UniqueName: \"kubernetes.io/projected/7852a74f-5273-47f4-8606-82c79b0a2179-kube-api-access-pz4fn\") pod \"collect-profiles-29426820-5npdd\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"
Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.406223 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t685j" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.453892 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-catalog-content\") pod \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.453968 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-utilities\") pod \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.454092 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrmtn\" (UniqueName: \"kubernetes.io/projected/31a88c0f-62fb-4209-8acf-b106c6d03d6e-kube-api-access-qrmtn\") pod \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\" (UID: \"31a88c0f-62fb-4209-8acf-b106c6d03d6e\") " Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.454871 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-utilities" (OuterVolumeSpecName: "utilities") pod "31a88c0f-62fb-4209-8acf-b106c6d03d6e" (UID: "31a88c0f-62fb-4209-8acf-b106c6d03d6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.457121 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a88c0f-62fb-4209-8acf-b106c6d03d6e-kube-api-access-qrmtn" (OuterVolumeSpecName: "kube-api-access-qrmtn") pod "31a88c0f-62fb-4209-8acf-b106c6d03d6e" (UID: "31a88c0f-62fb-4209-8acf-b106c6d03d6e"). InnerVolumeSpecName "kube-api-access-qrmtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.504373 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31a88c0f-62fb-4209-8acf-b106c6d03d6e" (UID: "31a88c0f-62fb-4209-8acf-b106c6d03d6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.555233 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrmtn\" (UniqueName: \"kubernetes.io/projected/31a88c0f-62fb-4209-8acf-b106c6d03d6e-kube-api-access-qrmtn\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.555339 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.555358 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a88c0f-62fb-4209-8acf-b106c6d03d6e-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.564205 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.681492 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t685j" event={"ID":"31a88c0f-62fb-4209-8acf-b106c6d03d6e","Type":"ContainerDied","Data":"d9225773aff29dbb8816520b1a46f0f688b17b3d2d1c9af642ee1933de237063"} Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.681597 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t685j" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.681909 4707 scope.go:117] "RemoveContainer" containerID="c582b2bbb37344f19bfd1c6dcc9fe5aa1eb2afc6e591d38a9e88e47bb36b3924" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.696360 4707 generic.go:334] "Generic (PLEG): container finished" podID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerID="dad91e21a401e5a21ec708b04a46557cf7d33536f77712c922196f2f04666c94" exitCode=0 Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.696410 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerDied","Data":"dad91e21a401e5a21ec708b04a46557cf7d33536f77712c922196f2f04666c94"} Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.752999 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t685j"] Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.756428 4707 scope.go:117] "RemoveContainer" containerID="714dff0439ad62d0b62818cc2a0920872ebc00889ffa109b6b8bd778cf5439e7" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.761158 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t685j"] Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.808589 4707 scope.go:117] "RemoveContainer" containerID="f983c7327702429525f69fc517a30df8cd4ca10d7bfd13c2ebe704a79e6dce9b" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.883560 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.960807 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-utilities\") pod \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.962134 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-utilities" (OuterVolumeSpecName: "utilities") pod "5a4673eb-3880-4df1-b8d3-95f31c7af7dd" (UID: "5a4673eb-3880-4df1-b8d3-95f31c7af7dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.962681 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-562nf\" (UniqueName: \"kubernetes.io/projected/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-kube-api-access-562nf\") pod \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.962891 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content\") pod \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\" (UID: \"5a4673eb-3880-4df1-b8d3-95f31c7af7dd\") " Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.963614 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:00 crc kubenswrapper[4707]: I1213 07:00:00.975189 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-kube-api-access-562nf" (OuterVolumeSpecName: "kube-api-access-562nf") pod "5a4673eb-3880-4df1-b8d3-95f31c7af7dd" (UID: "5a4673eb-3880-4df1-b8d3-95f31c7af7dd"). InnerVolumeSpecName "kube-api-access-562nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.021264 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a4673eb-3880-4df1-b8d3-95f31c7af7dd" (UID: "5a4673eb-3880-4df1-b8d3-95f31c7af7dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.050480 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd"] Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.064554 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-562nf\" (UniqueName: \"kubernetes.io/projected/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-kube-api-access-562nf\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.064583 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a4673eb-3880-4df1-b8d3-95f31c7af7dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.705421 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhlhc" event={"ID":"5a4673eb-3880-4df1-b8d3-95f31c7af7dd","Type":"ContainerDied","Data":"6ad26c4ccb4d3ab72779933226ab5ddc44b692dc908ed6b4e4cde5454d5e58bc"} Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.705473 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhlhc" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.705484 4707 scope.go:117] "RemoveContainer" containerID="dad91e21a401e5a21ec708b04a46557cf7d33536f77712c922196f2f04666c94" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.711938 4707 generic.go:334] "Generic (PLEG): container finished" podID="7852a74f-5273-47f4-8606-82c79b0a2179" containerID="4f36b6f6c24cd53f6807fd0c1a8d92510bce8fa46f6158e9472aa84a1e0eee3a" exitCode=0 Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.712088 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd" event={"ID":"7852a74f-5273-47f4-8606-82c79b0a2179","Type":"ContainerDied","Data":"4f36b6f6c24cd53f6807fd0c1a8d92510bce8fa46f6158e9472aa84a1e0eee3a"} Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.712114 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd" event={"ID":"7852a74f-5273-47f4-8606-82c79b0a2179","Type":"ContainerStarted","Data":"a12c9a0f09cc20fc0b5eb3f4a2579935a89a2d151d56858fd4802ce3ec8d51ed"} Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.735982 4707 scope.go:117] "RemoveContainer" containerID="03d1ceb967184c7f2e3a37fc8de32e61f90402240531df73fb8b5525770ba09f" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.738661 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhlhc"] Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.754842 4707 scope.go:117] "RemoveContainer" containerID="0171a22af15900257d8d8acb8390afd5657994da8d17605c213093ca218da1bd" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.756260 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" path="/var/lib/kubelet/pods/31a88c0f-62fb-4209-8acf-b106c6d03d6e/volumes" Dec 13 07:00:01 crc kubenswrapper[4707]: I1213 07:00:01.756943 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhlhc"] Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.039887 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.214288 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz4fn\" (UniqueName: \"kubernetes.io/projected/7852a74f-5273-47f4-8606-82c79b0a2179-kube-api-access-pz4fn\") pod \"7852a74f-5273-47f4-8606-82c79b0a2179\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.214381 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7852a74f-5273-47f4-8606-82c79b0a2179-secret-volume\") pod \"7852a74f-5273-47f4-8606-82c79b0a2179\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.214402 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7852a74f-5273-47f4-8606-82c79b0a2179-config-volume\") pod \"7852a74f-5273-47f4-8606-82c79b0a2179\" (UID: \"7852a74f-5273-47f4-8606-82c79b0a2179\") " Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.215518 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7852a74f-5273-47f4-8606-82c79b0a2179-config-volume" (OuterVolumeSpecName: "config-volume") pod "7852a74f-5273-47f4-8606-82c79b0a2179" (UID: "7852a74f-5273-47f4-8606-82c79b0a2179"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.215697 4707 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7852a74f-5273-47f4-8606-82c79b0a2179-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.220615 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7852a74f-5273-47f4-8606-82c79b0a2179-kube-api-access-pz4fn" (OuterVolumeSpecName: "kube-api-access-pz4fn") pod "7852a74f-5273-47f4-8606-82c79b0a2179" (UID: "7852a74f-5273-47f4-8606-82c79b0a2179"). InnerVolumeSpecName "kube-api-access-pz4fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.233130 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7852a74f-5273-47f4-8606-82c79b0a2179-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7852a74f-5273-47f4-8606-82c79b0a2179" (UID: "7852a74f-5273-47f4-8606-82c79b0a2179"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.316477 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz4fn\" (UniqueName: \"kubernetes.io/projected/7852a74f-5273-47f4-8606-82c79b0a2179-kube-api-access-pz4fn\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.316514 4707 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7852a74f-5273-47f4-8606-82c79b0a2179-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.731761 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd" event={"ID":"7852a74f-5273-47f4-8606-82c79b0a2179","Type":"ContainerDied","Data":"a12c9a0f09cc20fc0b5eb3f4a2579935a89a2d151d56858fd4802ce3ec8d51ed"} Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.731804 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12c9a0f09cc20fc0b5eb3f4a2579935a89a2d151d56858fd4802ce3ec8d51ed" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.731869 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426820-5npdd" Dec 13 07:00:03 crc kubenswrapper[4707]: I1213 07:00:03.753047 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" path="/var/lib/kubelet/pods/5a4673eb-3880-4df1-b8d3-95f31c7af7dd/volumes" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852520 4707 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852741 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7852a74f-5273-47f4-8606-82c79b0a2179" containerName="collect-profiles" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852758 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="7852a74f-5273-47f4-8606-82c79b0a2179" containerName="collect-profiles" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852772 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="extract-content" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852779 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="extract-content" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852792 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="extract-utilities" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852798 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="extract-utilities" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852811 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="registry-server" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852819 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="registry-server" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852830 4707 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="extract-utilities" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852836 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="extract-utilities" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852845 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="registry-server" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852852 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="registry-server" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.852866 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="extract-content" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852873 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="extract-content" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.852989 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="7852a74f-5273-47f4-8606-82c79b0a2179" containerName="collect-profiles" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.853010 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4673eb-3880-4df1-b8d3-95f31c7af7dd" containerName="registry-server" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.853021 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a88c0f-62fb-4209-8acf-b106c6d03d6e" containerName="registry-server" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.861439 4707 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.861495 4707 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.861852 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.861870 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.861883 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.861890 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.861898 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.861905 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.861925 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 
07:00:04.861969 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.861988 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.861994 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.862007 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862013 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.862023 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862035 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862301 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862330 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862339 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862350 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862357 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862365 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 13 07:00:04 crc kubenswrapper[4707]: E1213 07:00:04.862556 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862569 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.862752 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.864683 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
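The burst of "RemoveStaleState: removing container" / "Deleted CPUSet assignment" records happens because the kube-apiserver static pod was re-read from disk with a new UID (71bb4a3a...) while the CPU and memory managers still held allocations keyed by the old UID (f4b27818...). A minimal sketch of that bookkeeping, assuming a simple map keyed by (podUID, containerName); this illustrates the idea and is not kubelet's actual cpu_manager code.

```go
// Sketch of stale-state removal: entries whose (podUID, containerName) key no
// longer matches any admitted pod are purged, one log line per container.
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes assignments not present in the active set,
// mirroring the sweep the kubelet logs for each stale container.
func removeStaleState(assignments map[key]string, active map[key]bool) {
	for k := range assignments {
		if !active[k] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"f4b27818a5e8e43d0dc095d08835c792", "kube-apiserver"}: "cpus 0-3",
	}
	// The replacement static pod carries a new UID, so the old entry is stale.
	active := map[key]bool{
		{"71bb4a3aecc4ba5b26c4b7318770ce13", "kube-apiserver"}: true,
	}
	removeStaleState(assignments, active)
}
```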
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.865596 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537" gracePeriod=15
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.865672 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05" gracePeriod=15
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.865694 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2" gracePeriod=15
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.866112 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045" gracePeriod=15
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.865624 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641" gracePeriod=15
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.868971 4707 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Dec 13 07:00:04 crc kubenswrapper[4707]: I1213 07:00:04.921351 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034390 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034462 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034510 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
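The gracePeriod values in the "Killing container with a grace period" records (2 for the marketplace registry pods earlier, 15 for the kube-apiserver containers here) bound how long the runtime waits between the polite stop signal and the forced kill. A generic sketch of that sequence, assuming a local process handle rather than CRI-O's actual API; the command and durations are illustrative.

```go
// Sketch of "kill with a grace period": send SIGTERM, wait up to the grace
// period for exit, then escalate to SIGKILL. Not CRI-O's implementation.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace asks the process to exit, then force-kills it if it has not
// exited before the grace period elapses.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // start of the grace period
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL once the grace period expires
		fmt.Println("grace period expired; killed:", <-done)
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as in the registry-server records
}
```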
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034532 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034572 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034615 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034722 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.034814 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136567 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136634 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136652 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136668 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136701 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136724 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136741 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136771 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136833 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136869 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136889 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136910 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136929 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136965 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.136994 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.137023 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.212866 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:05 crc kubenswrapper[4707]: W1213 07:00:05.238579 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-756b0f58688363231d671f32a2e50906c3ee4c1298ba5110a4059f73f65ac122 WatchSource:0}: Error finding container 756b0f58688363231d671f32a2e50906c3ee4c1298ba5110a4059f73f65ac122: Status 404 returned error can't find the container with id 756b0f58688363231d671f32a2e50906c3ee4c1298ba5110a4059f73f65ac122 Dec 13 07:00:05 crc kubenswrapper[4707]: I1213 07:00:05.754098 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"756b0f58688363231d671f32a2e50906c3ee4c1298ba5110a4059f73f65ac122"} Dec 13 07:00:06 crc kubenswrapper[4707]: I1213 07:00:06.755810 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 07:00:06 crc kubenswrapper[4707]: I1213 07:00:06.759535 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 07:00:06 crc kubenswrapper[4707]: I1213 07:00:06.761046 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045" exitCode=2 Dec 13 07:00:07 crc kubenswrapper[4707]: E1213 07:00:07.094723 4707 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.210:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880b43820fab1b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 07:00:07.093703096 +0000 UTC m=+315.590882183,LastTimestamp:2025-12-13 07:00:07.093703096 +0000 UTC m=+315.590882183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 07:00:07 crc kubenswrapper[4707]: E1213 07:00:07.397304 4707 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.210:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880b43820fab1b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 07:00:07.093703096 +0000 UTC m=+315.590882183,LastTimestamp:2025-12-13 07:00:07.093703096 +0000 UTC m=+315.590882183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.768351 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.770385 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.771256 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641" exitCode=0 Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.771293 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05" exitCode=0 Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.771310 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2" exitCode=0 Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.771378 4707 scope.go:117] "RemoveContainer" containerID="63da4a417bfbe4082850f141d844fac900ad036277ca139d72742502a9981b9e" Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.773360 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a"} Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.774108 4707 status_manager.go:851] "Failed to get status for 
pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused"
Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.776138 4707 generic.go:334] "Generic (PLEG): container finished" podID="4e6767a5-947b-454d-80b7-585d10aebb4f" containerID="6f3bee4dba2652058c10ccf7fa59ef8084effe4b550ab56a44a0747c9c146662" exitCode=0
Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.776186 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4e6767a5-947b-454d-80b7-585d10aebb4f","Type":"ContainerDied","Data":"6f3bee4dba2652058c10ccf7fa59ef8084effe4b550ab56a44a0747c9c146662"}
Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.776657 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused"
Dec 13 07:00:07 crc kubenswrapper[4707]: I1213 07:00:07.776918 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused"
Dec 13 07:00:08 crc kubenswrapper[4707]: I1213 07:00:08.793774 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 13 07:00:08 crc kubenswrapper[4707]: I1213 07:00:08.795348 4707 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537" exitCode=0
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.075909 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.077100 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
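The repeated "Failed to get status for pod" and, earlier, "Unable to write event (may retry after sleeping)" errors are expected while the apiserver behind api-int.crc.testing:6443 is restarting: the kubelet keeps the pending update and retries after a delay rather than dropping it. A generic retry-after-sleeping sketch of that pattern; retryAfterSleeping, postEvent, and the backoff values are invented for illustration and say nothing about client-go internals.

```go
// Sketch of "may retry after sleeping": keep re-attempting an operation with a
// pause between tries until the endpoint stops refusing connections.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errConnRefused = errors.New("dial tcp 38.102.83.210:6443: connect: connection refused")

// retryAfterSleeping retries op with a fixed sleep between attempts, the way
// the kubelet keeps re-posting events until the apiserver is reachable again.
func retryAfterSleeping(op func() error, attempts int, sleep time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v (may retry after sleeping)\n", i+1, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	postEvent := func() error {
		calls++
		if calls < 3 { // pretend the apiserver comes back on the third try
			return errConnRefused
		}
		return nil
	}
	if err := retryAfterSleeping(postEvent, 5, 10*time.Millisecond); err != nil {
		fmt.Println("gave up:", err)
	} else {
		fmt.Println("event delivered after", calls, "attempts")
	}
}
```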
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.077844 4707 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused"
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.078202 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused"
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.078426 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused"
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.191373 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.191447 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.191473 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.192050 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.192083 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.192099 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.203339 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.203863 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.204231 4707 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.204591 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.293429 4707 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.293466 4707 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.293477 4707 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.394477 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-var-lock\") pod \"4e6767a5-947b-454d-80b7-585d10aebb4f\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.394572 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e6767a5-947b-454d-80b7-585d10aebb4f-kube-api-access\") pod \"4e6767a5-947b-454d-80b7-585d10aebb4f\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.394598 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-kubelet-dir\") pod \"4e6767a5-947b-454d-80b7-585d10aebb4f\" (UID: \"4e6767a5-947b-454d-80b7-585d10aebb4f\") " Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.394628 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-var-lock" (OuterVolumeSpecName: "var-lock") pod "4e6767a5-947b-454d-80b7-585d10aebb4f" (UID: 
"4e6767a5-947b-454d-80b7-585d10aebb4f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.394776 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4e6767a5-947b-454d-80b7-585d10aebb4f" (UID: "4e6767a5-947b-454d-80b7-585d10aebb4f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.395142 4707 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.395166 4707 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e6767a5-947b-454d-80b7-585d10aebb4f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.401305 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6767a5-947b-454d-80b7-585d10aebb4f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4e6767a5-947b-454d-80b7-585d10aebb4f" (UID: "4e6767a5-947b-454d-80b7-585d10aebb4f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.495925 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e6767a5-947b-454d-80b7-585d10aebb4f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.752783 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.802113 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4e6767a5-947b-454d-80b7-585d10aebb4f","Type":"ContainerDied","Data":"fe4394ce7b9c029ecda2e11e19c8f27321cc700f54a9cd2ed11a6301ddfe8649"} Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.802160 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4394ce7b9c029ecda2e11e19c8f27321cc700f54a9cd2ed11a6301ddfe8649" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.802126 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.805720 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.805862 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.805908 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.806553 4707 scope.go:117] "RemoveContainer" containerID="8a59cadcc6366b85de59b16e7de5b8f1aa728727bfc8bfdf756a3026f8a68641" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.806607 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.807231 4707 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.807752 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.808387 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.810860 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.811068 4707 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.811257 4707 status_manager.go:851] "Failed to get 
status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.885063 4707 scope.go:117] "RemoveContainer" containerID="71001e5091ac315959a1d8c05707b4e1698db42bc2e88a7f85ed698cb8e2ab05" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.900217 4707 scope.go:117] "RemoveContainer" containerID="bfa3908988091d74d1ded56447cdada5eabbd8d0ac4d3ede46becc852ee1d9f2" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.914943 4707 scope.go:117] "RemoveContainer" containerID="292c2efd17bb51a91a2e76b5307dddeefb0b5805dc141282af307cb075c6b045" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.929502 4707 scope.go:117] "RemoveContainer" containerID="8482e5f939a824f858bf8dd2192e01d5a417e3635f1b5fe3e8f43fc303d16537" Dec 13 07:00:09 crc kubenswrapper[4707]: I1213 07:00:09.946138 4707 scope.go:117] "RemoveContainer" containerID="19d1801c131f547543c4875f69857a5ded53ad3237ca6bc279e1f51260699273" Dec 13 07:00:11 crc kubenswrapper[4707]: I1213 07:00:11.747838 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:11 crc kubenswrapper[4707]: I1213 07:00:11.749158 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:11 crc kubenswrapper[4707]: I1213 07:00:11.749430 4707 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.143495 4707 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.146231 4707 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.146684 4707 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.147046 4707 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: 
connection refused" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.147380 4707 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:15 crc kubenswrapper[4707]: I1213 07:00:15.147435 4707 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.147775 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="200ms" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.348680 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="400ms" Dec 13 07:00:15 crc kubenswrapper[4707]: E1213 07:00:15.749280 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="800ms" Dec 13 07:00:16 crc kubenswrapper[4707]: E1213 07:00:16.550607 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="1.6s" Dec 13 07:00:17 crc kubenswrapper[4707]: E1213 07:00:17.398131 4707 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.210:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1880b43820fab1b8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-13 07:00:07.093703096 +0000 UTC m=+315.590882183,LastTimestamp:2025-12-13 07:00:07.093703096 +0000 UTC m=+315.590882183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 13 07:00:18 crc kubenswrapper[4707]: E1213 07:00:18.151238 4707 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.210:6443: connect: connection refused" interval="3.2s" Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.744572 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.745566 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.746185 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.772293 4707 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.772328 4707 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:18 crc kubenswrapper[4707]: E1213 07:00:18.772782 4707 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.773348 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:18 crc kubenswrapper[4707]: W1213 07:00:18.814836 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-6c3660c5cf140d61d8282f6f691375210ba2f8fcdefcb34e7d3adc550e7a2d11 WatchSource:0}: Error finding container 6c3660c5cf140d61d8282f6f691375210ba2f8fcdefcb34e7d3adc550e7a2d11: Status 404 returned error can't find the container with id 6c3660c5cf140d61d8282f6f691375210ba2f8fcdefcb34e7d3adc550e7a2d11 Dec 13 07:00:18 crc kubenswrapper[4707]: I1213 07:00:18.855965 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6c3660c5cf140d61d8282f6f691375210ba2f8fcdefcb34e7d3adc550e7a2d11"} Dec 13 07:00:19 crc kubenswrapper[4707]: I1213 07:00:19.862633 4707 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f952f7f1cb1dce028dab05552f252743496d87cc56d88c0d590d3e02da36649d" exitCode=0 Dec 13 07:00:19 crc kubenswrapper[4707]: I1213 07:00:19.862692 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f952f7f1cb1dce028dab05552f252743496d87cc56d88c0d590d3e02da36649d"} Dec 13 07:00:19 crc kubenswrapper[4707]: I1213 07:00:19.862947 4707 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:19 crc kubenswrapper[4707]: I1213 07:00:19.863001 4707 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:19 crc kubenswrapper[4707]: I1213 07:00:19.863473 4707 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:19 crc kubenswrapper[4707]: E1213 07:00:19.863553 4707 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:19 crc kubenswrapper[4707]: I1213 07:00:19.863849 4707 status_manager.go:851] "Failed to get status for pod" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.210:6443: connect: connection refused" Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.870977 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.871257 4707 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90" exitCode=1 Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.871325 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90"} Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.871817 4707 scope.go:117] "RemoveContainer" containerID="cef91cfdee168cf2ee15b7d5568481cc380d4e4b8c4be32587e8cab097d26a90" Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.875841 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9bb344347655efe8a913a76e7c901982439cc5d18efbac8ca13dcc2efb0e4381"} Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.875877 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ebf8f9ab609cac8e9dcb939f4f2b594d9927450f51f4ac300cfb04b9a2c76e33"} Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.875895 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b1bd4c0e785ec3abb3c87f40a8e168e2ebc9bd45e277076eda092325757881c5"} Dec 13 07:00:20 crc kubenswrapper[4707]: I1213 07:00:20.875927 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"251903b534fdb57344f2792bc5e4b6d3a1ab846a7f7e4869b7ce3143bf5d2b6f"} Dec 13 07:00:21 crc 
kubenswrapper[4707]: I1213 07:00:21.883726 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 13 07:00:21 crc kubenswrapper[4707]: I1213 07:00:21.884032 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21572f98cf25c58fd35771721fbd69ff69a30f6d7a4d209df2baed29ed2ba44d"} Dec 13 07:00:21 crc kubenswrapper[4707]: I1213 07:00:21.887831 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2f13b6926bfab323377204f03571b44515d25c2ae76b20fe88dddfd6204c8fc5"} Dec 13 07:00:21 crc kubenswrapper[4707]: I1213 07:00:21.887996 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:21 crc kubenswrapper[4707]: I1213 07:00:21.888072 4707 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:21 crc kubenswrapper[4707]: I1213 07:00:21.888098 4707 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:23 crc kubenswrapper[4707]: I1213 07:00:23.773916 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:23 crc kubenswrapper[4707]: I1213 07:00:23.773993 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:23 crc kubenswrapper[4707]: I1213 07:00:23.778854 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:25 crc kubenswrapper[4707]: I1213 07:00:25.529381 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 07:00:26 crc kubenswrapper[4707]: I1213 07:00:26.906595 4707 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:27 crc kubenswrapper[4707]: I1213 07:00:27.919612 4707 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:27 crc kubenswrapper[4707]: I1213 07:00:27.919660 4707 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:27 crc kubenswrapper[4707]: I1213 07:00:27.927040 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:27 crc kubenswrapper[4707]: I1213 07:00:27.933303 4707 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d18867ec-c3e3-4b3a-a9e8-b569caaa3240" Dec 13 07:00:28 crc kubenswrapper[4707]: I1213 07:00:28.925540 4707 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:28 crc kubenswrapper[4707]: 
I1213 07:00:28.925587 4707 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8bed4c7e-8a92-483b-9aca-47b6cdfba745" Dec 13 07:00:30 crc kubenswrapper[4707]: I1213 07:00:30.635115 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 07:00:30 crc kubenswrapper[4707]: I1213 07:00:30.638792 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 07:00:31 crc kubenswrapper[4707]: I1213 07:00:31.762908 4707 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d18867ec-c3e3-4b3a-a9e8-b569caaa3240" Dec 13 07:00:35 crc kubenswrapper[4707]: I1213 07:00:35.535453 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 13 07:00:36 crc kubenswrapper[4707]: I1213 07:00:36.650156 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 13 07:00:36 crc kubenswrapper[4707]: I1213 07:00:36.940109 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 13 07:00:37 crc kubenswrapper[4707]: I1213 07:00:37.046336 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 13 07:00:37 crc kubenswrapper[4707]: I1213 07:00:37.194293 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 07:00:37 crc kubenswrapper[4707]: I1213 07:00:37.461222 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 13 07:00:37 crc kubenswrapper[4707]: I1213 07:00:37.558371 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 13 07:00:37 crc kubenswrapper[4707]: I1213 07:00:37.865122 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 07:00:38 crc kubenswrapper[4707]: I1213 07:00:38.189476 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 13 07:00:38 crc kubenswrapper[4707]: I1213 07:00:38.305869 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 13 07:00:38 crc kubenswrapper[4707]: I1213 07:00:38.448434 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 13 07:00:38 crc kubenswrapper[4707]: I1213 07:00:38.631344 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 13 07:00:38 crc kubenswrapper[4707]: I1213 07:00:38.676045 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 13 07:00:38 crc kubenswrapper[4707]: I1213 07:00:38.747654 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 13 07:00:38 crc kubenswrapper[4707]: 
I1213 07:00:38.963938 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.202171 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.419158 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.447700 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.485162 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.499725 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.605076 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.665461 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.689933 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.749579 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.883684 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 13 07:00:39 crc kubenswrapper[4707]: I1213 07:00:39.985720 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.224344 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.253102 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.299863 4707 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.334140 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.503226 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.531457 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.549926 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.589733 4707 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.653025 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.668278 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.767773 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.801578 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.852161 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 13 07:00:40 crc kubenswrapper[4707]: I1213 07:00:40.987627 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.064940 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.095075 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.163538 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.238609 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.267518 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.370769 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.459210 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.530923 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.534047 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.552703 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.559732 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.581301 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.613689 4707 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.626112 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.738095 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.760242 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.769933 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.795432 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.820807 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.841184 4707 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.907973 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.969068 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 13 07:00:41 crc kubenswrapper[4707]: I1213 07:00:41.970330 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.003084 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.016648 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.075485 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.129248 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.186297 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.229230 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.235752 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.292231 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.352946 
4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.405264 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.582830 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.785573 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.832423 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.897823 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.942575 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 13 07:00:42 crc kubenswrapper[4707]: I1213 07:00:42.970776 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.020581 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.032373 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.048419 4707 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.050596 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.050581759 podStartE2EDuration="39.050581759s" podCreationTimestamp="2025-12-13 07:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:00:26.644188809 +0000 UTC m=+335.141367856" watchObservedRunningTime="2025-12-13 07:00:43.050581759 +0000 UTC m=+351.547760806" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.058366 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.058456 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.068191 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.069321 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.117272 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.117254775 podStartE2EDuration="17.117254775s" podCreationTimestamp="2025-12-13 07:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:00:43.084711797 +0000 UTC m=+351.581890844" watchObservedRunningTime="2025-12-13 07:00:43.117254775 +0000 UTC m=+351.614433822" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.126016 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.156181 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nlwlz"] Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.162998 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.170231 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.188280 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.239198 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.432769 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.513399 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.582506 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.588255 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.610542 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.638167 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.641722 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.652708 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.662415 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.704692 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.722036 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.728997 4707 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.795000 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.899971 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.935324 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.949990 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 13 07:00:43 crc kubenswrapper[4707]: I1213 07:00:43.984656 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.029164 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.072832 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.370498 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.403638 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.405468 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.729386 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.799140 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.807508 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.948217 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.994130 4707 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 13 07:00:44 crc kubenswrapper[4707]: I1213 07:00:44.999043 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.047567 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.107164 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.146815 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 13 
07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.155714 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.207706 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.207748 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.298487 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.319530 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.360174 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.437757 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.445963 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.473633 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.510579 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.532609 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.573180 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.665093 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.673323 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.704012 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.782825 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.934148 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.985772 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 13 07:00:45 crc kubenswrapper[4707]: I1213 07:00:45.987465 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 13 
07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.010924 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.079112 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.155936 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.270924 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.278275 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.306101 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.335102 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.347323 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.347410 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.393431 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.406331 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.549339 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.555026 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.605840 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.636326 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.678802 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.697836 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.715912 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.784125 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.792883 4707 reflector.go:368] 
Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.868210 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.882369 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.937132 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 13 07:00:46 crc kubenswrapper[4707]: I1213 07:00:46.946381 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.010336 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.272305 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.311824 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.451311 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.562103 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.625750 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.636609 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.704827 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.756296 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.758372 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.761452 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.795079 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.872151 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.874472 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.945558 4707 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 13 07:00:47 crc kubenswrapper[4707]: I1213 07:00:47.946713 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.016414 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.102160 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.121690 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.127680 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.133860 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.155613 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.218182 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.331021 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.356142 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.452607 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.567223 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.595452 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.613820 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.667729 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.678738 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.706209 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.774876 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.828071 4707 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.846074 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.858365 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.863731 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 13 07:00:48 crc kubenswrapper[4707]: I1213 07:00:48.959470 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.042007 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.213173 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.242051 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.325263 4707 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.325564 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a" gracePeriod=5 Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.328792 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.401220 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.412649 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.484966 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.500390 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.531903 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.591060 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.596670 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 13 07:00:49 crc 
kubenswrapper[4707]: I1213 07:00:49.604012 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.761636 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.808936 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.855047 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 13 07:00:49 crc kubenswrapper[4707]: I1213 07:00:49.864739 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.006291 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.069828 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.120843 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.180419 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.589564 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.593910 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.638039 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.653055 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.675198 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.679628 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.743277 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.785551 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.816664 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 13 
07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.879810 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 13 07:00:50 crc kubenswrapper[4707]: I1213 07:00:50.913353 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.112185 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.139045 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.350057 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.378871 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.475883 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.577707 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.829066 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.843932 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.904739 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 13 07:00:51 crc kubenswrapper[4707]: I1213 07:00:51.946596 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.065887 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.069745 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.213312 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.273913 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.296107 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.312895 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.341300 4707 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Dec 13 07:00:52 crc kubenswrapper[4707]: I1213 07:00:52.911310 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 13 07:00:53 crc kubenswrapper[4707]: I1213 07:00:53.074392 4707 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 13 07:00:53 crc kubenswrapper[4707]: I1213 07:00:53.131231 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 13 07:00:53 crc kubenswrapper[4707]: I1213 07:00:53.564658 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 13 07:00:53 crc kubenswrapper[4707]: I1213 07:00:53.584611 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 13 07:00:54 crc kubenswrapper[4707]: I1213 07:00:54.900229 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 13 07:00:54 crc kubenswrapper[4707]: I1213 07:00:54.900308 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063120 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063167 4707 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a" exitCode=137 Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063207 4707 scope.go:117] "RemoveContainer" containerID="04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063264 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063614 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063656 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063685 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063732 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063754 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063787 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063749 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063778 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.063798 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.064121 4707 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.064157 4707 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.064167 4707 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.064178 4707 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.073304 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.118811 4707 scope.go:117] "RemoveContainer" containerID="04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a" Dec 13 07:00:55 crc kubenswrapper[4707]: E1213 07:00:55.119382 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a\": container with ID starting with 04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a not found: ID does not exist" containerID="04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.119425 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a"} err="failed to get container status \"04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a\": rpc error: code = NotFound desc = could not find container \"04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a\": container with ID starting with 04cb92adb7dfde78a00204640604c72795ca67db2828c5428844ea518d46102a not found: ID does not exist" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.165230 4707 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.752695 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.753059 4707 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.764329 4707 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.764377 4707 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4f9789b1-f93a-48b0-acfc-0ec8ae51529e" Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.769411 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 13 07:00:55 crc kubenswrapper[4707]: I1213 07:00:55.769694 4707 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4f9789b1-f93a-48b0-acfc-0ec8ae51529e" Dec 13 07:01:08 crc kubenswrapper[4707]: I1213 07:01:08.187747 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" podUID="5c02287e-e126-49cd-94a2-5e0596607990" containerName="oauth-openshift" containerID="cri-o://1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871" gracePeriod=15 Dec 13 07:01:08 crc kubenswrapper[4707]: I1213 07:01:08.588209 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b4c9f74-mlng8"] Dec 13 07:01:08 crc kubenswrapper[4707]: I1213 07:01:08.588454 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" podUID="d48505e5-0a2d-4f70-b105-7c180700351c" containerName="controller-manager" containerID="cri-o://cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0" gracePeriod=30 Dec 13 07:01:08 crc kubenswrapper[4707]: I1213 07:01:08.689847 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7"] Dec 13 07:01:08 crc kubenswrapper[4707]: I1213 07:01:08.690074 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" podUID="28b006c3-ebd2-4b65-a4fe-588bfc060fbc" containerName="route-controller-manager" containerID="cri-o://8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917" gracePeriod=30 Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.000180 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.055993 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.130935 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-service-ca\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.130994 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-proxy-ca-bundles\") pod \"d48505e5-0a2d-4f70-b105-7c180700351c\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131028 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-serving-cert\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131049 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-trusted-ca-bundle\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131073 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-ocp-branding-template\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131863 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131900 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d48505e5-0a2d-4f70-b105-7c180700351c" (UID: "d48505e5-0a2d-4f70-b105-7c180700351c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131900 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.131094 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-cliconfig\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132019 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-error\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132048 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-login\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132090 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48505e5-0a2d-4f70-b105-7c180700351c-serving-cert\") pod \"d48505e5-0a2d-4f70-b105-7c180700351c\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132130 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-session\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132163 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-idp-0-file-data\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132192 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c02287e-e126-49cd-94a2-5e0596607990-audit-dir\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132207 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-config\") pod \"d48505e5-0a2d-4f70-b105-7c180700351c\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132235 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpnfd\" (UniqueName: \"kubernetes.io/projected/5c02287e-e126-49cd-94a2-5e0596607990-kube-api-access-mpnfd\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132276 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-provider-selection\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132304 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-client-ca\") pod \"d48505e5-0a2d-4f70-b105-7c180700351c\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132333 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-audit-policies\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132353 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-router-certs\") pod \"5c02287e-e126-49cd-94a2-5e0596607990\" (UID: \"5c02287e-e126-49cd-94a2-5e0596607990\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132380 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w55p9\" (UniqueName: \"kubernetes.io/projected/d48505e5-0a2d-4f70-b105-7c180700351c-kube-api-access-w55p9\") pod \"d48505e5-0a2d-4f70-b105-7c180700351c\" (UID: \"d48505e5-0a2d-4f70-b105-7c180700351c\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132670 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132684 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132693 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.132940 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c02287e-e126-49cd-94a2-5e0596607990-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.134576 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.136646 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.136823 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.136831 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d48505e5-0a2d-4f70-b105-7c180700351c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d48505e5-0a2d-4f70-b105-7c180700351c" (UID: "d48505e5-0a2d-4f70-b105-7c180700351c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.137279 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.137339 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d48505e5-0a2d-4f70-b105-7c180700351c-kube-api-access-w55p9" (OuterVolumeSpecName: "kube-api-access-w55p9") pod "d48505e5-0a2d-4f70-b105-7c180700351c" (UID: "d48505e5-0a2d-4f70-b105-7c180700351c"). InnerVolumeSpecName "kube-api-access-w55p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.137487 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-config" (OuterVolumeSpecName: "config") pod "d48505e5-0a2d-4f70-b105-7c180700351c" (UID: "d48505e5-0a2d-4f70-b105-7c180700351c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.137757 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-client-ca" (OuterVolumeSpecName: "client-ca") pod "d48505e5-0a2d-4f70-b105-7c180700351c" (UID: "d48505e5-0a2d-4f70-b105-7c180700351c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.138679 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.139829 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.141055 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.141073 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c02287e-e126-49cd-94a2-5e0596607990-kube-api-access-mpnfd" (OuterVolumeSpecName: "kube-api-access-mpnfd") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "kube-api-access-mpnfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.141341 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.142279 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.143689 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.153648 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5c02287e-e126-49cd-94a2-5e0596607990" (UID: "5c02287e-e126-49cd-94a2-5e0596607990"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.154670 4707 generic.go:334] "Generic (PLEG): container finished" podID="d48505e5-0a2d-4f70-b105-7c180700351c" containerID="cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0" exitCode=0 Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.154749 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" event={"ID":"d48505e5-0a2d-4f70-b105-7c180700351c","Type":"ContainerDied","Data":"cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0"} Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.154776 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" event={"ID":"d48505e5-0a2d-4f70-b105-7c180700351c","Type":"ContainerDied","Data":"d5ff24a46d80ae71a587c7fe1140866ba78a027bf59034505a516b971c45fafb"} Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.154793 4707 scope.go:117] "RemoveContainer" containerID="cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.154884 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b4c9f74-mlng8" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.159387 4707 generic.go:334] "Generic (PLEG): container finished" podID="5c02287e-e126-49cd-94a2-5e0596607990" containerID="1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871" exitCode=0 Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.159480 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.161065 4707 generic.go:334] "Generic (PLEG): container finished" podID="28b006c3-ebd2-4b65-a4fe-588bfc060fbc" containerID="8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917" exitCode=0 Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.161196 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.161425 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" event={"ID":"5c02287e-e126-49cd-94a2-5e0596607990","Type":"ContainerDied","Data":"1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871"} Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.161606 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nlwlz" event={"ID":"5c02287e-e126-49cd-94a2-5e0596607990","Type":"ContainerDied","Data":"fbb0b3763fb3867b169a60f79c9320a442c7ec02650f24cd253d82a36fb51e3d"} Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.161680 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" event={"ID":"28b006c3-ebd2-4b65-a4fe-588bfc060fbc","Type":"ContainerDied","Data":"8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917"} Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.161780 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7" event={"ID":"28b006c3-ebd2-4b65-a4fe-588bfc060fbc","Type":"ContainerDied","Data":"6f864902f039cd7621a8f7ea72c6777fda76078d75e41fe7a6bbcc544320b493"} Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.181836 4707 scope.go:117] "RemoveContainer" containerID="cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.182362 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0\": container with ID starting with cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0 not found: ID does not exist" containerID="cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.182387 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0"} err="failed to get container status \"cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0\": rpc error: code = NotFound desc = could not find container \"cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0\": container with ID starting with cffa5448cb43bcd05c35fc8e050b9fbe37c46df4b8fad664fa91b8f7b183c6d0 not found: ID does not exist" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.182408 4707 scope.go:117] "RemoveContainer" containerID="1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.196337 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b4c9f74-mlng8"] Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.200544 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57b4c9f74-mlng8"] Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.209632 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nlwlz"] Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.210985 4707 scope.go:117] "RemoveContainer" 
containerID="1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.211743 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871\": container with ID starting with 1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871 not found: ID does not exist" containerID="1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.211767 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871"} err="failed to get container status \"1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871\": rpc error: code = NotFound desc = could not find container \"1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871\": container with ID starting with 1e44ea01558b30004cb5df6669f607b61e744159447b820e6aeb9d8dfcfa3871 not found: ID does not exist" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.211787 4707 scope.go:117] "RemoveContainer" containerID="8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.213994 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nlwlz"] Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.225795 4707 scope.go:117] "RemoveContainer" containerID="8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.226240 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917\": container with ID starting with 8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917 not found: ID does not exist" containerID="8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.226271 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917"} err="failed to get container status \"8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917\": rpc error: code = NotFound desc = could not find container \"8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917\": container with ID starting with 8671f09f1e6e751efc9309c95a9c0f06aca2f36c4a80915ef8273e31bb924917 not found: ID does not exist" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.233860 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-config\") pod \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234044 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-serving-cert\") pod \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234162 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-7jrfx\" (UniqueName: \"kubernetes.io/projected/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-kube-api-access-7jrfx\") pod \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234253 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-client-ca\") pod \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\" (UID: \"28b006c3-ebd2-4b65-a4fe-588bfc060fbc\") " Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234509 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234589 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234661 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234753 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234819 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-client-ca" (OuterVolumeSpecName: "client-ca") pod "28b006c3-ebd2-4b65-a4fe-588bfc060fbc" (UID: "28b006c3-ebd2-4b65-a4fe-588bfc060fbc"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234834 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234896 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d48505e5-0a2d-4f70-b105-7c180700351c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234910 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234922 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234921 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-config" (OuterVolumeSpecName: "config") pod "28b006c3-ebd2-4b65-a4fe-588bfc060fbc" (UID: "28b006c3-ebd2-4b65-a4fe-588bfc060fbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.234932 4707 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c02287e-e126-49cd-94a2-5e0596607990-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235005 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235218 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpnfd\" (UniqueName: \"kubernetes.io/projected/5c02287e-e126-49cd-94a2-5e0596607990-kube-api-access-mpnfd\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235235 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235247 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d48505e5-0a2d-4f70-b105-7c180700351c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235260 4707 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c02287e-e126-49cd-94a2-5e0596607990-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235271 4707 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5c02287e-e126-49cd-94a2-5e0596607990-v4-0-config-system-router-certs\") on node \"crc\" 
DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.235285 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w55p9\" (UniqueName: \"kubernetes.io/projected/d48505e5-0a2d-4f70-b105-7c180700351c-kube-api-access-w55p9\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.236793 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "28b006c3-ebd2-4b65-a4fe-588bfc060fbc" (UID: "28b006c3-ebd2-4b65-a4fe-588bfc060fbc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.236996 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-kube-api-access-7jrfx" (OuterVolumeSpecName: "kube-api-access-7jrfx") pod "28b006c3-ebd2-4b65-a4fe-588bfc060fbc" (UID: "28b006c3-ebd2-4b65-a4fe-588bfc060fbc"). InnerVolumeSpecName "kube-api-access-7jrfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.338615 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.338651 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.338664 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jrfx\" (UniqueName: \"kubernetes.io/projected/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-kube-api-access-7jrfx\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.338675 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28b006c3-ebd2-4b65-a4fe-588bfc060fbc-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.491830 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7"] Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.496942 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-575f8f4845-zsdj7"] Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.690633 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"] Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.690860 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28b006c3-ebd2-4b65-a4fe-588bfc060fbc" containerName="route-controller-manager" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.690878 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="28b006c3-ebd2-4b65-a4fe-588bfc060fbc" containerName="route-controller-manager" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.690893 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" containerName="installer" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.690904 4707 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" containerName="installer" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.690931 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d48505e5-0a2d-4f70-b105-7c180700351c" containerName="controller-manager" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.690940 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="d48505e5-0a2d-4f70-b105-7c180700351c" containerName="controller-manager" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.690981 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.690991 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 07:01:09 crc kubenswrapper[4707]: E1213 07:01:09.691002 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c02287e-e126-49cd-94a2-5e0596607990" containerName="oauth-openshift" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691009 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c02287e-e126-49cd-94a2-5e0596607990" containerName="oauth-openshift" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691116 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="28b006c3-ebd2-4b65-a4fe-588bfc060fbc" containerName="route-controller-manager" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691132 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c02287e-e126-49cd-94a2-5e0596607990" containerName="oauth-openshift" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691151 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6767a5-947b-454d-80b7-585d10aebb4f" containerName="installer" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691159 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691168 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="d48505e5-0a2d-4f70-b105-7c180700351c" containerName="controller-manager" Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.691624 4707 util.go:30] "No sandbox for pod can be found. 
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.695818 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.695824 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.695989 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.696335 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.697708 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.698596 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.711577 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"]
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.714777 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.752760 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28b006c3-ebd2-4b65-a4fe-588bfc060fbc" path="/var/lib/kubelet/pods/28b006c3-ebd2-4b65-a4fe-588bfc060fbc/volumes"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.753422 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c02287e-e126-49cd-94a2-5e0596607990" path="/var/lib/kubelet/pods/5c02287e-e126-49cd-94a2-5e0596607990/volumes"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.754039 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d48505e5-0a2d-4f70-b105-7c180700351c" path="/var/lib/kubelet/pods/d48505e5-0a2d-4f70-b105-7c180700351c/volumes"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.780798 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5czf\" (UniqueName: \"kubernetes.io/projected/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-kube-api-access-c5czf\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.780842 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-config\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.780896 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-proxy-ca-bundles\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.780924 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-client-ca\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.780941 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-serving-cert\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.881702 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-proxy-ca-bundles\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.882231 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-client-ca\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.882269 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-serving-cert\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.882386 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5czf\" (UniqueName: \"kubernetes.io/projected/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-kube-api-access-c5czf\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.882411 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-config\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.882911 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-proxy-ca-bundles\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.883577 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-config\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.883760 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-client-ca\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.894724 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-serving-cert\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:09 crc kubenswrapper[4707]: I1213 07:01:09.898580 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5czf\" (UniqueName: \"kubernetes.io/projected/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-kube-api-access-c5czf\") pod \"controller-manager-54b8cc7d68-txxsc\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.008064 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.397251 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"]
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.690519 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"]
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.691426 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.693582 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.693718 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.693705 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.693876 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.693921 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.694814 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.703332 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"]
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.892109 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-client-ca\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.892157 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-serving-cert\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.892184 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnx9\" (UniqueName: \"kubernetes.io/projected/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-kube-api-access-rvnx9\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.892427 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-config\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"
Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.993517 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-config\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"
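Each pod in this log carries a generated kube-api-access-* projected volume: a bound service-account token, the kube-root-ca.crt ConfigMap, and the pod's namespace via the downward API. A hand-written equivalent using the k8s.io/api types; in a real cluster the API server injects this volume automatically, and the name and expiration below are illustrative:

```go
// Hand-written equivalent of the generated kube-api-access-* projected
// volume mounted above: service-account token + kube-root-ca.crt +
// namespace via downward API.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiration := int64(3607) // default bound-token lifetime used by kubelet
	vol := corev1.Volume{
		Name: "kube-api-access-rvnx9",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiration,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println("projected volume:", vol.Name)
}
```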
\"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.993586 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-client-ca\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.993609 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-serving-cert\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.994377 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnx9\" (UniqueName: \"kubernetes.io/projected/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-kube-api-access-rvnx9\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.994555 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-client-ca\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.994887 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-config\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:10 crc kubenswrapper[4707]: I1213 07:01:10.998611 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-serving-cert\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.022325 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnx9\" (UniqueName: \"kubernetes.io/projected/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-kube-api-access-rvnx9\") pod \"route-controller-manager-5f7f8769d4-7fkvk\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.060984 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.188572 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" event={"ID":"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8","Type":"ContainerStarted","Data":"c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12"} Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.188616 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" event={"ID":"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8","Type":"ContainerStarted","Data":"c4fd4aaaceb0c62b53145d9d2960daf971370740688b47fec7fc93c493c610ab"} Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.189122 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.196611 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.243739 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" podStartSLOduration=3.243720743 podStartE2EDuration="3.243720743s" podCreationTimestamp="2025-12-13 07:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:01:11.213173814 +0000 UTC m=+379.710352861" watchObservedRunningTime="2025-12-13 07:01:11.243720743 +0000 UTC m=+379.740899780" Dec 13 07:01:11 crc kubenswrapper[4707]: I1213 07:01:11.335348 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"] Dec 13 07:01:11 crc kubenswrapper[4707]: W1213 07:01:11.340757 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod243f50b2_ef8e_48f4_805d_1bf8ee11f86e.slice/crio-735b23c95fb1990b02f0c20c1fee9961c4eafb5f570bae16b1abddc59819856a WatchSource:0}: Error finding container 735b23c95fb1990b02f0c20c1fee9961c4eafb5f570bae16b1abddc59819856a: Status 404 returned error can't find the container with id 735b23c95fb1990b02f0c20c1fee9961c4eafb5f570bae16b1abddc59819856a Dec 13 07:01:12 crc kubenswrapper[4707]: I1213 07:01:12.195062 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" event={"ID":"243f50b2-ef8e-48f4-805d-1bf8ee11f86e","Type":"ContainerStarted","Data":"6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956"} Dec 13 07:01:12 crc kubenswrapper[4707]: I1213 07:01:12.195108 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" event={"ID":"243f50b2-ef8e-48f4-805d-1bf8ee11f86e","Type":"ContainerStarted","Data":"735b23c95fb1990b02f0c20c1fee9961c4eafb5f570bae16b1abddc59819856a"} Dec 13 07:01:13 crc kubenswrapper[4707]: I1213 07:01:13.199515 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:13 crc kubenswrapper[4707]: I1213 07:01:13.205164 4707 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:13 crc kubenswrapper[4707]: I1213 07:01:13.218584 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" podStartSLOduration=5.218563556 podStartE2EDuration="5.218563556s" podCreationTimestamp="2025-12-13 07:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:01:12.217309111 +0000 UTC m=+380.714488158" watchObservedRunningTime="2025-12-13 07:01:13.218563556 +0000 UTC m=+381.715742603" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.696335 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-bf446bd49-f6v2t"] Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.697664 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.704225 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.710593 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.712326 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.713192 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.713466 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.714124 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.714249 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.715259 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bf446bd49-f6v2t"] Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.715838 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.716022 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.716791 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.717035 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.719128 4707 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.722555 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.722969 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.727002 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.879922 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nskg9\" (UniqueName: \"kubernetes.io/projected/4465da5c-c042-46dc-9152-ae05e277b6de-kube-api-access-nskg9\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.879995 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880042 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-router-certs\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880078 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880112 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-error\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880133 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880181 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880208 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-session\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880224 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880243 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-login\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880262 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4465da5c-c042-46dc-9152-ae05e277b6de-audit-dir\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880280 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880314 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-service-ca\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.880343 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-audit-policies\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc 
kubenswrapper[4707]: I1213 07:01:17.981553 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981614 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-error\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981650 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981708 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981736 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-session\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981755 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981774 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-login\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981797 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4465da5c-c042-46dc-9152-ae05e277b6de-audit-dir\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981819 4707 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981841 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-service-ca\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981860 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-audit-policies\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981885 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nskg9\" (UniqueName: \"kubernetes.io/projected/4465da5c-c042-46dc-9152-ae05e277b6de-kube-api-access-nskg9\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981906 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.981923 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-router-certs\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.982575 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4465da5c-c042-46dc-9152-ae05e277b6de-audit-dir\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.983564 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.983729 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-audit-policies\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.987563 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.987627 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-service-ca\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.988513 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.989483 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-error\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.990703 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-router-certs\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.991770 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.993936 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-template-login\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.994474 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-session\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.995903 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:17 crc kubenswrapper[4707]: I1213 07:01:17.997719 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4465da5c-c042-46dc-9152-ae05e277b6de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:18 crc kubenswrapper[4707]: I1213 07:01:18.003114 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nskg9\" (UniqueName: \"kubernetes.io/projected/4465da5c-c042-46dc-9152-ae05e277b6de-kube-api-access-nskg9\") pod \"oauth-openshift-bf446bd49-f6v2t\" (UID: \"4465da5c-c042-46dc-9152-ae05e277b6de\") " pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:18 crc kubenswrapper[4707]: I1213 07:01:18.017984 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:18 crc kubenswrapper[4707]: I1213 07:01:18.226722 4707 generic.go:334] "Generic (PLEG): container finished" podID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerID="dfae44c57b3f058cf596f099f0abe96a8830d565514b49373bab2aa895e17c56" exitCode=0 Dec 13 07:01:18 crc kubenswrapper[4707]: I1213 07:01:18.226794 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" event={"ID":"11795d95-2c15-4a44-9276-6786ec1e5cc1","Type":"ContainerDied","Data":"dfae44c57b3f058cf596f099f0abe96a8830d565514b49373bab2aa895e17c56"} Dec 13 07:01:18 crc kubenswrapper[4707]: I1213 07:01:18.227478 4707 scope.go:117] "RemoveContainer" containerID="dfae44c57b3f058cf596f099f0abe96a8830d565514b49373bab2aa895e17c56" Dec 13 07:01:18 crc kubenswrapper[4707]: I1213 07:01:18.452485 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bf446bd49-f6v2t"] Dec 13 07:01:18 crc kubenswrapper[4707]: W1213 07:01:18.461119 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4465da5c_c042_46dc_9152_ae05e277b6de.slice/crio-131ae49f89284682dc443419dfb2f346e347756fbd7175c1372b9ea7bf13f422 WatchSource:0}: Error finding container 131ae49f89284682dc443419dfb2f346e347756fbd7175c1372b9ea7bf13f422: Status 404 returned error can't find the container with id 131ae49f89284682dc443419dfb2f346e347756fbd7175c1372b9ea7bf13f422 Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.233908 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" 
event={"ID":"11795d95-2c15-4a44-9276-6786ec1e5cc1","Type":"ContainerStarted","Data":"ddeb4a3ad18ae5ba6a8ed98d5faf61866a9b41856737f7d7df46bee99abcf9f2"} Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.236088 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.238158 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" event={"ID":"4465da5c-c042-46dc-9152-ae05e277b6de","Type":"ContainerStarted","Data":"84c314136f60935645524793e164937f9a214060ee1a2e4e858bcf4a1d2d8d3e"} Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.238202 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" event={"ID":"4465da5c-c042-46dc-9152-ae05e277b6de","Type":"ContainerStarted","Data":"131ae49f89284682dc443419dfb2f346e347756fbd7175c1372b9ea7bf13f422"} Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.238587 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.239564 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.286768 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" podStartSLOduration=36.286750368 podStartE2EDuration="36.286750368s" podCreationTimestamp="2025-12-13 07:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:01:19.271671754 +0000 UTC m=+387.768850801" watchObservedRunningTime="2025-12-13 07:01:19.286750368 +0000 UTC m=+387.783929415" Dec 13 07:01:19 crc kubenswrapper[4707]: I1213 07:01:19.495947 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-bf446bd49-f6v2t" Dec 13 07:01:28 crc kubenswrapper[4707]: I1213 07:01:28.583555 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"] Dec 13 07:01:28 crc kubenswrapper[4707]: I1213 07:01:28.584181 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" podUID="b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" containerName="controller-manager" containerID="cri-o://c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12" gracePeriod=30 Dec 13 07:01:28 crc kubenswrapper[4707]: I1213 07:01:28.602356 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"] Dec 13 07:01:28 crc kubenswrapper[4707]: I1213 07:01:28.602564 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" podUID="243f50b2-ef8e-48f4-805d-1bf8ee11f86e" containerName="route-controller-manager" containerID="cri-o://6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956" gracePeriod=30 Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.099721 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.149885 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-client-ca\") pod \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.150015 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-serving-cert\") pod \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.150156 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnx9\" (UniqueName: \"kubernetes.io/projected/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-kube-api-access-rvnx9\") pod \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.150235 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-config\") pod \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\" (UID: \"243f50b2-ef8e-48f4-805d-1bf8ee11f86e\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.151122 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-client-ca" (OuterVolumeSpecName: "client-ca") pod "243f50b2-ef8e-48f4-805d-1bf8ee11f86e" (UID: "243f50b2-ef8e-48f4-805d-1bf8ee11f86e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.151522 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-config" (OuterVolumeSpecName: "config") pod "243f50b2-ef8e-48f4-805d-1bf8ee11f86e" (UID: "243f50b2-ef8e-48f4-805d-1bf8ee11f86e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.160771 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.161405 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-kube-api-access-rvnx9" (OuterVolumeSpecName: "kube-api-access-rvnx9") pod "243f50b2-ef8e-48f4-805d-1bf8ee11f86e" (UID: "243f50b2-ef8e-48f4-805d-1bf8ee11f86e"). InnerVolumeSpecName "kube-api-access-rvnx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.166440 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "243f50b2-ef8e-48f4-805d-1bf8ee11f86e" (UID: "243f50b2-ef8e-48f4-805d-1bf8ee11f86e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.251515 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-proxy-ca-bundles\") pod \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.251580 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-client-ca\") pod \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.251654 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5czf\" (UniqueName: \"kubernetes.io/projected/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-kube-api-access-c5czf\") pod \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.251710 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-serving-cert\") pod \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.251740 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-config\") pod \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\" (UID: \"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8\") " Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.251987 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnx9\" (UniqueName: \"kubernetes.io/projected/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-kube-api-access-rvnx9\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.252054 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.252069 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.252080 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/243f50b2-ef8e-48f4-805d-1bf8ee11f86e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.252599 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" (UID: "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.252624 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" (UID: "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.252792 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-config" (OuterVolumeSpecName: "config") pod "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" (UID: "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.254439 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" (UID: "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.254510 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-kube-api-access-c5czf" (OuterVolumeSpecName: "kube-api-access-c5czf") pod "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" (UID: "b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8"). InnerVolumeSpecName "kube-api-access-c5czf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.294735 4707 generic.go:334] "Generic (PLEG): container finished" podID="b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" containerID="c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12" exitCode=0 Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.294790 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.294790 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" event={"ID":"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8","Type":"ContainerDied","Data":"c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12"} Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.294850 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b8cc7d68-txxsc" event={"ID":"b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8","Type":"ContainerDied","Data":"c4fd4aaaceb0c62b53145d9d2960daf971370740688b47fec7fc93c493c610ab"} Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.294867 4707 scope.go:117] "RemoveContainer" containerID="c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.298402 4707 generic.go:334] "Generic (PLEG): container finished" podID="243f50b2-ef8e-48f4-805d-1bf8ee11f86e" containerID="6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956" exitCode=0 Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.298441 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" event={"ID":"243f50b2-ef8e-48f4-805d-1bf8ee11f86e","Type":"ContainerDied","Data":"6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956"} Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.298473 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" event={"ID":"243f50b2-ef8e-48f4-805d-1bf8ee11f86e","Type":"ContainerDied","Data":"735b23c95fb1990b02f0c20c1fee9961c4eafb5f570bae16b1abddc59819856a"} Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.298490 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.314517 4707 scope.go:117] "RemoveContainer" containerID="c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12" Dec 13 07:01:29 crc kubenswrapper[4707]: E1213 07:01:29.315787 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12\": container with ID starting with c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12 not found: ID does not exist" containerID="c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.315858 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12"} err="failed to get container status \"c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12\": rpc error: code = NotFound desc = could not find container \"c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12\": container with ID starting with c787f74c944c4df1790cec97ca143ed954be424b9cf0ddd495a683e6d35d1b12 not found: ID does not exist" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.315888 4707 scope.go:117] "RemoveContainer" containerID="6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.328301 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"] Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.334639 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-txxsc"] Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.336913 4707 scope.go:117] "RemoveContainer" containerID="6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956" Dec 13 07:01:29 crc kubenswrapper[4707]: E1213 07:01:29.337495 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956\": container with ID starting with 6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956 not found: ID does not exist" containerID="6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.337534 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956"} err="failed to get container status \"6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956\": rpc error: code = NotFound desc = could not find container \"6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956\": container with ID starting with 6a1e4b1c5d98d6b263a7622fd9639fb9c47513f5425c861cfe684515982ca956 not found: ID does not exist" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.339418 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"] Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.342598 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-7fkvk"] Dec 13 
07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.353230 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.353269 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.353281 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5czf\" (UniqueName: \"kubernetes.io/projected/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-kube-api-access-c5czf\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.353289 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.353299 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.708943 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84995b548b-tjsfp"] Dec 13 07:01:29 crc kubenswrapper[4707]: E1213 07:01:29.709217 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" containerName="controller-manager" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.709233 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" containerName="controller-manager" Dec 13 07:01:29 crc kubenswrapper[4707]: E1213 07:01:29.709253 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="243f50b2-ef8e-48f4-805d-1bf8ee11f86e" containerName="route-controller-manager" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.709261 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="243f50b2-ef8e-48f4-805d-1bf8ee11f86e" containerName="route-controller-manager" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.709366 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" containerName="controller-manager" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.709383 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="243f50b2-ef8e-48f4-805d-1bf8ee11f86e" containerName="route-controller-manager" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.709796 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.711263 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.713855 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.713982 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.714007 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.715308 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.721314 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.725784 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.730650 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm"] Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.731208 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm"] Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.731281 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.733903 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.735082 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.736620 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.738014 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.738278 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.738507 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.739284 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84995b548b-tjsfp"] Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.755890 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="243f50b2-ef8e-48f4-805d-1bf8ee11f86e" path="/var/lib/kubelet/pods/243f50b2-ef8e-48f4-805d-1bf8ee11f86e/volumes" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.756783 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8" path="/var/lib/kubelet/pods/b8ad2a9b-72d1-4aa3-ba64-47f7e2b241e8/volumes" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.758206 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67af9ab-17be-4f84-87c3-756ae574cc37-serving-cert\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.758253 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-client-ca\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.758303 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-config\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.758369 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65d9\" (UniqueName: \"kubernetes.io/projected/a67af9ab-17be-4f84-87c3-756ae574cc37-kube-api-access-m65d9\") pod 
\"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.758425 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-proxy-ca-bundles\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.859469 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-proxy-ca-bundles\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.859889 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wvc\" (UniqueName: \"kubernetes.io/projected/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-kube-api-access-m8wvc\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860000 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67af9ab-17be-4f84-87c3-756ae574cc37-serving-cert\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860091 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-client-ca\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860208 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-client-ca\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860729 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-config\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860866 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-config\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " 
pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860994 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-serving-cert\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.861097 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65d9\" (UniqueName: \"kubernetes.io/projected/a67af9ab-17be-4f84-87c3-756ae574cc37-kube-api-access-m65d9\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860574 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-proxy-ca-bundles\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.860783 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-client-ca\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.862196 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-config\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.866454 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67af9ab-17be-4f84-87c3-756ae574cc37-serving-cert\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.880994 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65d9\" (UniqueName: \"kubernetes.io/projected/a67af9ab-17be-4f84-87c3-756ae574cc37-kube-api-access-m65d9\") pod \"controller-manager-84995b548b-tjsfp\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.962134 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-serving-cert\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.962631 4707 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m8wvc\" (UniqueName: \"kubernetes.io/projected/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-kube-api-access-m8wvc\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.962802 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-client-ca\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.963041 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-config\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.963638 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-client-ca\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.964177 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-config\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.965699 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-serving-cert\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:29 crc kubenswrapper[4707]: I1213 07:01:29.981172 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8wvc\" (UniqueName: \"kubernetes.io/projected/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-kube-api-access-m8wvc\") pod \"route-controller-manager-f68b86755-kwjsm\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:30 crc kubenswrapper[4707]: I1213 07:01:30.026815 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:30 crc kubenswrapper[4707]: I1213 07:01:30.064936 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:30 crc kubenswrapper[4707]: I1213 07:01:30.374118 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm"] Dec 13 07:01:30 crc kubenswrapper[4707]: W1213 07:01:30.382929 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a00e08b_35dd_44ac_aa97_17ce27e1b6e5.slice/crio-f51ee9640bc0861998b43bbb706607260f6b0370299cfbae2afa6cb50cb175ab WatchSource:0}: Error finding container f51ee9640bc0861998b43bbb706607260f6b0370299cfbae2afa6cb50cb175ab: Status 404 returned error can't find the container with id f51ee9640bc0861998b43bbb706607260f6b0370299cfbae2afa6cb50cb175ab Dec 13 07:01:30 crc kubenswrapper[4707]: I1213 07:01:30.466851 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84995b548b-tjsfp"] Dec 13 07:01:30 crc kubenswrapper[4707]: W1213 07:01:30.472630 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda67af9ab_17be_4f84_87c3_756ae574cc37.slice/crio-f29f3d1b3ab9e28dfde0253d368fd87f46c76d9ac1787c6dea3d2bbf052ac127 WatchSource:0}: Error finding container f29f3d1b3ab9e28dfde0253d368fd87f46c76d9ac1787c6dea3d2bbf052ac127: Status 404 returned error can't find the container with id f29f3d1b3ab9e28dfde0253d368fd87f46c76d9ac1787c6dea3d2bbf052ac127 Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.316558 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" event={"ID":"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5","Type":"ContainerStarted","Data":"5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1"} Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.317848 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.317967 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" event={"ID":"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5","Type":"ContainerStarted","Data":"f51ee9640bc0861998b43bbb706607260f6b0370299cfbae2afa6cb50cb175ab"} Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.319866 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" event={"ID":"a67af9ab-17be-4f84-87c3-756ae574cc37","Type":"ContainerStarted","Data":"357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3"} Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.319923 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" event={"ID":"a67af9ab-17be-4f84-87c3-756ae574cc37","Type":"ContainerStarted","Data":"f29f3d1b3ab9e28dfde0253d368fd87f46c76d9ac1787c6dea3d2bbf052ac127"} Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.321545 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.323718 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.324991 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.338254 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" podStartSLOduration=3.338230969 podStartE2EDuration="3.338230969s" podCreationTimestamp="2025-12-13 07:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:01:31.336423431 +0000 UTC m=+399.833602478" watchObservedRunningTime="2025-12-13 07:01:31.338230969 +0000 UTC m=+399.835410016" Dec 13 07:01:31 crc kubenswrapper[4707]: I1213 07:01:31.364775 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" podStartSLOduration=3.364756055 podStartE2EDuration="3.364756055s" podCreationTimestamp="2025-12-13 07:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:01:31.35882038 +0000 UTC m=+399.855999447" watchObservedRunningTime="2025-12-13 07:01:31.364756055 +0000 UTC m=+399.861935102" Dec 13 07:01:53 crc kubenswrapper[4707]: I1213 07:01:53.348543 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:01:53 crc kubenswrapper[4707]: I1213 07:01:53.349085 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:02:08 crc kubenswrapper[4707]: I1213 07:02:08.596090 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84995b548b-tjsfp"] Dec 13 07:02:08 crc kubenswrapper[4707]: I1213 07:02:08.597107 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" podUID="a67af9ab-17be-4f84-87c3-756ae574cc37" containerName="controller-manager" containerID="cri-o://357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3" gracePeriod=30 Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.488604 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.512647 4707 generic.go:334] "Generic (PLEG): container finished" podID="a67af9ab-17be-4f84-87c3-756ae574cc37" containerID="357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3" exitCode=0 Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.512689 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" event={"ID":"a67af9ab-17be-4f84-87c3-756ae574cc37","Type":"ContainerDied","Data":"357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3"} Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.512714 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" event={"ID":"a67af9ab-17be-4f84-87c3-756ae574cc37","Type":"ContainerDied","Data":"f29f3d1b3ab9e28dfde0253d368fd87f46c76d9ac1787c6dea3d2bbf052ac127"} Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.512731 4707 scope.go:117] "RemoveContainer" containerID="357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.512864 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84995b548b-tjsfp" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.529306 4707 scope.go:117] "RemoveContainer" containerID="357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3" Dec 13 07:02:09 crc kubenswrapper[4707]: E1213 07:02:09.529904 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3\": container with ID starting with 357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3 not found: ID does not exist" containerID="357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.529966 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3"} err="failed to get container status \"357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3\": rpc error: code = NotFound desc = could not find container \"357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3\": container with ID starting with 357cb5f227ddab6d4d9ef292fef5874ffc17af6bfb6c874ce5422a71df71ade3 not found: ID does not exist" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.639366 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67af9ab-17be-4f84-87c3-756ae574cc37-serving-cert\") pod \"a67af9ab-17be-4f84-87c3-756ae574cc37\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.639410 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-client-ca\") pod \"a67af9ab-17be-4f84-87c3-756ae574cc37\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.640279 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m65d9\" (UniqueName: 
\"kubernetes.io/projected/a67af9ab-17be-4f84-87c3-756ae574cc37-kube-api-access-m65d9\") pod \"a67af9ab-17be-4f84-87c3-756ae574cc37\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.640320 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-proxy-ca-bundles\") pod \"a67af9ab-17be-4f84-87c3-756ae574cc37\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.640337 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-config\") pod \"a67af9ab-17be-4f84-87c3-756ae574cc37\" (UID: \"a67af9ab-17be-4f84-87c3-756ae574cc37\") " Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.640883 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-client-ca" (OuterVolumeSpecName: "client-ca") pod "a67af9ab-17be-4f84-87c3-756ae574cc37" (UID: "a67af9ab-17be-4f84-87c3-756ae574cc37"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.641041 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-config" (OuterVolumeSpecName: "config") pod "a67af9ab-17be-4f84-87c3-756ae574cc37" (UID: "a67af9ab-17be-4f84-87c3-756ae574cc37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.641277 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a67af9ab-17be-4f84-87c3-756ae574cc37" (UID: "a67af9ab-17be-4f84-87c3-756ae574cc37"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.645068 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67af9ab-17be-4f84-87c3-756ae574cc37-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a67af9ab-17be-4f84-87c3-756ae574cc37" (UID: "a67af9ab-17be-4f84-87c3-756ae574cc37"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.646696 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67af9ab-17be-4f84-87c3-756ae574cc37-kube-api-access-m65d9" (OuterVolumeSpecName: "kube-api-access-m65d9") pod "a67af9ab-17be-4f84-87c3-756ae574cc37" (UID: "a67af9ab-17be-4f84-87c3-756ae574cc37"). InnerVolumeSpecName "kube-api-access-m65d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.736360 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-65djb"] Dec 13 07:02:09 crc kubenswrapper[4707]: E1213 07:02:09.736566 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a67af9ab-17be-4f84-87c3-756ae574cc37" containerName="controller-manager" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.736579 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="a67af9ab-17be-4f84-87c3-756ae574cc37" containerName="controller-manager" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.736714 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67af9ab-17be-4f84-87c3-756ae574cc37" containerName="controller-manager" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.737113 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.741498 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m65d9\" (UniqueName: \"kubernetes.io/projected/a67af9ab-17be-4f84-87c3-756ae574cc37-kube-api-access-m65d9\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.741546 4707 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.741562 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.741574 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67af9ab-17be-4f84-87c3-756ae574cc37-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.741587 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a67af9ab-17be-4f84-87c3-756ae574cc37-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.787441 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-65djb"] Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.828760 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84995b548b-tjsfp"] Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.832770 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84995b548b-tjsfp"] Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.842522 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kgl\" (UniqueName: \"kubernetes.io/projected/3b911c63-c0b6-44d6-93e2-4af03b55c554-kube-api-access-s7kgl\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.842576 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b911c63-c0b6-44d6-93e2-4af03b55c554-serving-cert\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.842603 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-config\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.842631 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-proxy-ca-bundles\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.842649 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-client-ca\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.943465 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kgl\" (UniqueName: \"kubernetes.io/projected/3b911c63-c0b6-44d6-93e2-4af03b55c554-kube-api-access-s7kgl\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.943515 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b911c63-c0b6-44d6-93e2-4af03b55c554-serving-cert\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.943536 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-config\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.943564 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-proxy-ca-bundles\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.943581 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-client-ca\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.944527 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-client-ca\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.945279 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-config\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.945408 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b911c63-c0b6-44d6-93e2-4af03b55c554-proxy-ca-bundles\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.952741 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b911c63-c0b6-44d6-93e2-4af03b55c554-serving-cert\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:09 crc kubenswrapper[4707]: I1213 07:02:09.964508 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7kgl\" (UniqueName: \"kubernetes.io/projected/3b911c63-c0b6-44d6-93e2-4af03b55c554-kube-api-access-s7kgl\") pod \"controller-manager-54b8cc7d68-65djb\" (UID: \"3b911c63-c0b6-44d6-93e2-4af03b55c554\") " pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:10 crc kubenswrapper[4707]: I1213 07:02:10.062099 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:10 crc kubenswrapper[4707]: I1213 07:02:10.465875 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54b8cc7d68-65djb"] Dec 13 07:02:10 crc kubenswrapper[4707]: I1213 07:02:10.517834 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" event={"ID":"3b911c63-c0b6-44d6-93e2-4af03b55c554","Type":"ContainerStarted","Data":"7c8afa1bfd3b13595edb73b78ec6a82c06e62beef98eb6cff5bbde9575ee6dd2"} Dec 13 07:02:11 crc kubenswrapper[4707]: I1213 07:02:11.525788 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" event={"ID":"3b911c63-c0b6-44d6-93e2-4af03b55c554","Type":"ContainerStarted","Data":"ff3f74b738dfc6308f89bcca6f0229b1fd69c7891f1e4276223578de7c1fb85d"} Dec 13 07:02:11 crc kubenswrapper[4707]: I1213 07:02:11.526249 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:11 crc kubenswrapper[4707]: I1213 07:02:11.531040 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" Dec 13 07:02:11 crc kubenswrapper[4707]: I1213 07:02:11.545578 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54b8cc7d68-65djb" podStartSLOduration=3.545556067 podStartE2EDuration="3.545556067s" podCreationTimestamp="2025-12-13 07:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:02:11.544236453 +0000 UTC m=+440.041415530" watchObservedRunningTime="2025-12-13 07:02:11.545556067 +0000 UTC m=+440.042735114" Dec 13 07:02:11 crc kubenswrapper[4707]: I1213 07:02:11.751599 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67af9ab-17be-4f84-87c3-756ae574cc37" path="/var/lib/kubelet/pods/a67af9ab-17be-4f84-87c3-756ae574cc37/volumes" Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.558154 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5h448"] Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.558813 4707 util.go:30] "No sandbox for pod can be found. 
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.578294 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5h448"]
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584164 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584215 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584237 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-registry-tls\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584259 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-bound-sa-token\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584295 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584328 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-registry-certificates\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584351 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ljp\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-kube-api-access-22ljp\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.584383 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-trusted-ca\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.642180 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.685818 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-bound-sa-token\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.685888 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.685918 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-registry-certificates\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.685934 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ljp\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-kube-api-access-22ljp\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.685984 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-trusted-ca\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.686031 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.686052 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-registry-tls\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.686924 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.688073 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-trusted-ca\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.688299 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-registry-certificates\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.694915 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-registry-tls\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.699804 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.704382 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-bound-sa-token\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.720158 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ljp\" (UniqueName: \"kubernetes.io/projected/5754c101-6ef4-4a24-8bb2-a8f5927b9bff-kube-api-access-22ljp\") pod \"image-registry-66df7c8f76-5h448\" (UID: \"5754c101-6ef4-4a24-8bb2-a8f5927b9bff\") " pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:12 crc kubenswrapper[4707]: I1213 07:02:12.880076 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:13 crc kubenswrapper[4707]: I1213 07:02:13.346240 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5h448"]
Dec 13 07:02:13 crc kubenswrapper[4707]: W1213 07:02:13.350847 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5754c101_6ef4_4a24_8bb2_a8f5927b9bff.slice/crio-0e22f6c50dae4d670d55dadf54b1d7735d85711ef188a1c0ac6f54c606cfdd79 WatchSource:0}: Error finding container 0e22f6c50dae4d670d55dadf54b1d7735d85711ef188a1c0ac6f54c606cfdd79: Status 404 returned error can't find the container with id 0e22f6c50dae4d670d55dadf54b1d7735d85711ef188a1c0ac6f54c606cfdd79
Dec 13 07:02:13 crc kubenswrapper[4707]: I1213 07:02:13.536267 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5h448" event={"ID":"5754c101-6ef4-4a24-8bb2-a8f5927b9bff","Type":"ContainerStarted","Data":"0e22f6c50dae4d670d55dadf54b1d7735d85711ef188a1c0ac6f54c606cfdd79"}
Dec 13 07:02:14 crc kubenswrapper[4707]: I1213 07:02:14.541833 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5h448" event={"ID":"5754c101-6ef4-4a24-8bb2-a8f5927b9bff","Type":"ContainerStarted","Data":"764a576ed8e1b2de59f9c533b4ea83f4ec76121157da5a29e1c29d2d6859ccd2"}
Dec 13 07:02:14 crc kubenswrapper[4707]: I1213 07:02:14.542159 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:14 crc kubenswrapper[4707]: I1213 07:02:14.564643 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5h448" podStartSLOduration=2.5646242089999998 podStartE2EDuration="2.564624209s" podCreationTimestamp="2025-12-13 07:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:02:14.561767814 +0000 UTC m=+443.058946891" watchObservedRunningTime="2025-12-13 07:02:14.564624209 +0000 UTC m=+443.061803256"
Dec 13 07:02:23 crc kubenswrapper[4707]: I1213 07:02:23.348370 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 13 07:02:23 crc kubenswrapper[4707]: I1213 07:02:23.348756 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 13 07:02:28 crc kubenswrapper[4707]: I1213 07:02:28.581162 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm"]
Dec 13 07:02:28 crc kubenswrapper[4707]: I1213 07:02:28.581970 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" podUID="0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" containerName="route-controller-manager" containerID="cri-o://5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1" gracePeriod=30
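The Liveness failure above is a plain HTTP GET that cannot connect. A minimal sketch of the same kind of check (endpoint taken from the probe output above; the timeout and success criterion are assumptions, not the kubelet's exact configuration):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// This is the failure mode in the log:
		// dial tcp 127.0.0.1:8798: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// HTTP probes treat 2xx/3xx status codes as success.
	fmt.Println("probe status:", resp.Status)
}
```

"Connection refused" means nothing was listening on 127.0.0.1:8798 at that moment, which is typically a process that has exited or is restarting rather than one answering with an error status.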
containerID="cri-o://5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1" gracePeriod=30 Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.557553 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.626825 4707 generic.go:334] "Generic (PLEG): container finished" podID="0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" containerID="5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1" exitCode=0 Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.626870 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" event={"ID":"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5","Type":"ContainerDied","Data":"5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1"} Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.626887 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.626917 4707 scope.go:117] "RemoveContainer" containerID="5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.626904 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm" event={"ID":"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5","Type":"ContainerDied","Data":"f51ee9640bc0861998b43bbb706607260f6b0370299cfbae2afa6cb50cb175ab"} Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.642381 4707 scope.go:117] "RemoveContainer" containerID="5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1" Dec 13 07:02:29 crc kubenswrapper[4707]: E1213 07:02:29.643150 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1\": container with ID starting with 5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1 not found: ID does not exist" containerID="5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.643219 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1"} err="failed to get container status \"5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1\": rpc error: code = NotFound desc = could not find container \"5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1\": container with ID starting with 5a0a73eb3be0944190e4b8790fda7ea5d34b97d9a3fcf8b324b7941633d895f1 not found: ID does not exist" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.666855 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-config\") pod \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.666925 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-client-ca\") pod 
\"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.667016 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wvc\" (UniqueName: \"kubernetes.io/projected/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-kube-api-access-m8wvc\") pod \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.667044 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-serving-cert\") pod \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\" (UID: \"0a00e08b-35dd-44ac-aa97-17ce27e1b6e5\") " Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.667778 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" (UID: "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.667847 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-config" (OuterVolumeSpecName: "config") pod "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" (UID: "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.671577 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" (UID: "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.677783 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-kube-api-access-m8wvc" (OuterVolumeSpecName: "kube-api-access-m8wvc") pod "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" (UID: "0a00e08b-35dd-44ac-aa97-17ce27e1b6e5"). InnerVolumeSpecName "kube-api-access-m8wvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.759361 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs"] Dec 13 07:02:29 crc kubenswrapper[4707]: E1213 07:02:29.759579 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" containerName="route-controller-manager" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.759595 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" containerName="route-controller-manager" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.759727 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" containerName="route-controller-manager" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.760352 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.764237 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs"] Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.768018 4707 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.768042 4707 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-client-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.768051 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wvc\" (UniqueName: \"kubernetes.io/projected/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-kube-api-access-m8wvc\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.768060 4707 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.868850 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj2n9\" (UniqueName: \"kubernetes.io/projected/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-kube-api-access-hj2n9\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.868916 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-client-ca\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.868942 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-config\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.868990 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-serving-cert\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.946274 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm"] Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.949473 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-f68b86755-kwjsm"] Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.970658 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj2n9\" (UniqueName: \"kubernetes.io/projected/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-kube-api-access-hj2n9\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.971021 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-client-ca\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.971209 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-config\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.971314 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-serving-cert\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.972185 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-client-ca\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.972543 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-config\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:29 crc kubenswrapper[4707]: I1213 07:02:29.983029 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-serving-cert\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.006790 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj2n9\" (UniqueName: \"kubernetes.io/projected/4f4e98e4-e3f8-4971-bcdf-ed125dc661ea-kube-api-access-hj2n9\") pod \"route-controller-manager-5f7f8769d4-pq2qs\" (UID: \"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea\") " pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" Dec 13 07:02:30 crc 
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.488806 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs"]
Dec 13 07:02:30 crc kubenswrapper[4707]: W1213 07:02:30.492334 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4e98e4_e3f8_4971_bcdf_ed125dc661ea.slice/crio-8b6533527045e22c190bba52de99dceb82c4cefebad5c41311606e82271083b3 WatchSource:0}: Error finding container 8b6533527045e22c190bba52de99dceb82c4cefebad5c41311606e82271083b3: Status 404 returned error can't find the container with id 8b6533527045e22c190bba52de99dceb82c4cefebad5c41311606e82271083b3
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.638147 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" event={"ID":"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea","Type":"ContainerStarted","Data":"27fdb4eceafbbccdabf66f2c4d46584e48e053844c52b25284939d5cfc098aca"}
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.638448 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" event={"ID":"4f4e98e4-e3f8-4971-bcdf-ed125dc661ea","Type":"ContainerStarted","Data":"8b6533527045e22c190bba52de99dceb82c4cefebad5c41311606e82271083b3"}
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.638566 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs"
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.639813 4707 patch_prober.go:28] interesting pod/route-controller-manager-5f7f8769d4-pq2qs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused" start-of-body=
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.639869 4707 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" podUID="4f4e98e4-e3f8-4971-bcdf-ed125dc661ea" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.71:8443/healthz\": dial tcp 10.217.0.71:8443: connect: connection refused"
Dec 13 07:02:30 crc kubenswrapper[4707]: I1213 07:02:30.664281 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs" podStartSLOduration=2.664258667 podStartE2EDuration="2.664258667s" podCreationTimestamp="2025-12-13 07:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:02:30.659382389 +0000 UTC m=+459.156561436" watchObservedRunningTime="2025-12-13 07:02:30.664258667 +0000 UTC m=+459.161437714"
Dec 13 07:02:31 crc kubenswrapper[4707]: I1213 07:02:31.650119 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f7f8769d4-pq2qs"
Dec 13 07:02:31 crc kubenswrapper[4707]: I1213 07:02:31.752713 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a00e08b-35dd-44ac-aa97-17ce27e1b6e5" path="/var/lib/kubelet/pods/0a00e08b-35dd-44ac-aa97-17ce27e1b6e5/volumes"
Dec 13 07:02:32 crc kubenswrapper[4707]: I1213 07:02:32.885483 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5h448"
Dec 13 07:02:32 crc kubenswrapper[4707]: I1213 07:02:32.934183 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-995mf"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.063093 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z47zb"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.063887 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z47zb" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="registry-server" containerID="cri-o://31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c" gracePeriod=30
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.079288 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gkw57"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.079518 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gkw57" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="registry-server" containerID="cri-o://9822868977e799d309d5a03935630365c95ae8b2d7d3dcc70afd941b3bed2207" gracePeriod=30
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.091554 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglrx"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.091890 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" containerID="cri-o://ddeb4a3ad18ae5ba6a8ed98d5faf61866a9b41856737f7d7df46bee99abcf9f2" gracePeriod=30
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.112351 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75wd8"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.112591 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-75wd8" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="registry-server" containerID="cri-o://c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538" gracePeriod=30
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.120384 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2t9v8"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.120614 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2t9v8" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="registry-server" containerID="cri-o://1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8" gracePeriod=30
Dec 13 07:02:45 crc kubenswrapper[4707]: E1213 07:02:45.130693 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c is running failed: container process not found" containerID="31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c" cmd=["grpc_health_probe","-addr=:50051"]
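The burst of "SyncLoop DELETE" / "Killing container with a grace period" entries above all carry gracePeriod=30, i.e. the API deletes were issued with a 30-second grace period. A minimal client-go sketch of issuing one such delete (in-cluster config is an assumption; the namespace and pod name are taken from the first DELETE entry above):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The same 30s that shows up as gracePeriod=30 in the kubelet log.
	grace := int64(30)
	err = client.CoreV1().Pods("openshift-marketplace").Delete(
		context.TODO(),
		"certified-operators-z47zb",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
	fmt.Println("delete err:", err)
}
```

All five registry pods exit with exitCode=0 within a couple of seconds in the entries that follow, so the grace period is never exhausted and no SIGKILL is needed.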
Dec 13 07:02:45 crc kubenswrapper[4707]: E1213 07:02:45.134350 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c is running failed: container process not found" containerID="31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c" cmd=["grpc_health_probe","-addr=:50051"]
Dec 13 07:02:45 crc kubenswrapper[4707]: E1213 07:02:45.134945 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c is running failed: container process not found" containerID="31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c" cmd=["grpc_health_probe","-addr=:50051"]
Dec 13 07:02:45 crc kubenswrapper[4707]: E1213 07:02:45.134996 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-z47zb" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="registry-server"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.140461 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99vtt"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.141272 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
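The failing readiness probes above are exec probes: the kubelet runs cmd=["grpc_health_probe","-addr=:50051"] inside the container, and the exec fails with NotFound because the container process is already being torn down. A minimal sketch of the equivalent check done over the network with the standard gRPC health API (the address is the one from the probe command; the plaintext connection and empty service name, meaning overall server health, are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	conn, err := grpc.Dial("127.0.0.1:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{Service: ""})
	if err != nil {
		fmt.Println("probe errored:", err) // Unavailable/NotFound once the server is gone
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING when healthy
}
```

Note that "Probe errored" during a graceful shutdown, as here, is expected noise: the prober races the container's exit.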
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.155803 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99vtt"]
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.194935 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwlwz\" (UniqueName: \"kubernetes.io/projected/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-kube-api-access-dwlwz\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.195041 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.195133 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.296022 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwlwz\" (UniqueName: \"kubernetes.io/projected/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-kube-api-access-dwlwz\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.296118 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.297294 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.297359 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.304447 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.311609 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwlwz\" (UniqueName: \"kubernetes.io/projected/3d15d15c-6c4f-4e32-bbaf-caebdb10506c-kube-api-access-dwlwz\") pod \"marketplace-operator-79b997595-99vtt\" (UID: \"3d15d15c-6c4f-4e32-bbaf-caebdb10506c\") " pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.462908 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt"
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.714054 4707 generic.go:334] "Generic (PLEG): container finished" podID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerID="c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538" exitCode=0
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.714078 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerDied","Data":"c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538"}
Dec 13 07:02:45 crc kubenswrapper[4707]: I1213 07:02:45.857675 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99vtt"]
Dec 13 07:02:46 crc kubenswrapper[4707]: E1213 07:02:46.613545 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538 is running failed: container process not found" containerID="c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538" cmd=["grpc_health_probe","-addr=:50051"]
Dec 13 07:02:46 crc kubenswrapper[4707]: E1213 07:02:46.614068 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538 is running failed: container process not found" containerID="c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538" cmd=["grpc_health_probe","-addr=:50051"]
Dec 13 07:02:46 crc kubenswrapper[4707]: E1213 07:02:46.614494 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538 is running failed: container process not found" containerID="c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538" cmd=["grpc_health_probe","-addr=:50051"]
Dec 13 07:02:46 crc kubenswrapper[4707]: E1213 07:02:46.614524 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-75wd8" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="registry-server"
Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.720831 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt" event={"ID":"3d15d15c-6c4f-4e32-bbaf-caebdb10506c","Type":"ContainerStarted","Data":"aa7f7d44a10c47a258c1c1a94afed1cc9dbac8218f212eb8da68cca5122f7c01"}
event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt" event={"ID":"3d15d15c-6c4f-4e32-bbaf-caebdb10506c","Type":"ContainerStarted","Data":"aa7f7d44a10c47a258c1c1a94afed1cc9dbac8218f212eb8da68cca5122f7c01"} Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.722944 4707 generic.go:334] "Generic (PLEG): container finished" podID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerID="1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8" exitCode=0 Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.723024 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerDied","Data":"1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8"} Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.725014 4707 generic.go:334] "Generic (PLEG): container finished" podID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerID="31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c" exitCode=0 Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.725102 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerDied","Data":"31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c"} Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.726744 4707 generic.go:334] "Generic (PLEG): container finished" podID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerID="ddeb4a3ad18ae5ba6a8ed98d5faf61866a9b41856737f7d7df46bee99abcf9f2" exitCode=0 Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.726817 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" event={"ID":"11795d95-2c15-4a44-9276-6786ec1e5cc1","Type":"ContainerDied","Data":"ddeb4a3ad18ae5ba6a8ed98d5faf61866a9b41856737f7d7df46bee99abcf9f2"} Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.726872 4707 scope.go:117] "RemoveContainer" containerID="dfae44c57b3f058cf596f099f0abe96a8830d565514b49373bab2aa895e17c56" Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.730694 4707 generic.go:334] "Generic (PLEG): container finished" podID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerID="9822868977e799d309d5a03935630365c95ae8b2d7d3dcc70afd941b3bed2207" exitCode=0 Dec 13 07:02:46 crc kubenswrapper[4707]: I1213 07:02:46.730727 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerDied","Data":"9822868977e799d309d5a03935630365c95ae8b2d7d3dcc70afd941b3bed2207"} Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.301446 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.302183 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.356914 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-operator-metrics\") pod \"11795d95-2c15-4a44-9276-6786ec1e5cc1\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.357064 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhrn\" (UniqueName: \"kubernetes.io/projected/fb1a1a41-b86e-4de7-af57-7862986b3037-kube-api-access-lmhrn\") pod \"fb1a1a41-b86e-4de7-af57-7862986b3037\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.357097 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-utilities\") pod \"fb1a1a41-b86e-4de7-af57-7862986b3037\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.357125 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-catalog-content\") pod \"fb1a1a41-b86e-4de7-af57-7862986b3037\" (UID: \"fb1a1a41-b86e-4de7-af57-7862986b3037\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.357157 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h84wn\" (UniqueName: \"kubernetes.io/projected/11795d95-2c15-4a44-9276-6786ec1e5cc1-kube-api-access-h84wn\") pod \"11795d95-2c15-4a44-9276-6786ec1e5cc1\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.357183 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-trusted-ca\") pod \"11795d95-2c15-4a44-9276-6786ec1e5cc1\" (UID: \"11795d95-2c15-4a44-9276-6786ec1e5cc1\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.360285 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "11795d95-2c15-4a44-9276-6786ec1e5cc1" (UID: "11795d95-2c15-4a44-9276-6786ec1e5cc1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.361731 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-utilities" (OuterVolumeSpecName: "utilities") pod "fb1a1a41-b86e-4de7-af57-7862986b3037" (UID: "fb1a1a41-b86e-4de7-af57-7862986b3037"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.369272 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb1a1a41-b86e-4de7-af57-7862986b3037-kube-api-access-lmhrn" (OuterVolumeSpecName: "kube-api-access-lmhrn") pod "fb1a1a41-b86e-4de7-af57-7862986b3037" (UID: "fb1a1a41-b86e-4de7-af57-7862986b3037"). 
InnerVolumeSpecName "kube-api-access-lmhrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.371443 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "11795d95-2c15-4a44-9276-6786ec1e5cc1" (UID: "11795d95-2c15-4a44-9276-6786ec1e5cc1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.374668 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11795d95-2c15-4a44-9276-6786ec1e5cc1-kube-api-access-h84wn" (OuterVolumeSpecName: "kube-api-access-h84wn") pod "11795d95-2c15-4a44-9276-6786ec1e5cc1" (UID: "11795d95-2c15-4a44-9276-6786ec1e5cc1"). InnerVolumeSpecName "kube-api-access-h84wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.383762 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.398932 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb1a1a41-b86e-4de7-af57-7862986b3037" (UID: "fb1a1a41-b86e-4de7-af57-7862986b3037"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458292 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2cbk\" (UniqueName: \"kubernetes.io/projected/1c15410b-e411-44d7-8663-d19dfc2a76b6-kube-api-access-c2cbk\") pod \"1c15410b-e411-44d7-8663-d19dfc2a76b6\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458381 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-utilities\") pod \"1c15410b-e411-44d7-8663-d19dfc2a76b6\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458416 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-catalog-content\") pod \"1c15410b-e411-44d7-8663-d19dfc2a76b6\" (UID: \"1c15410b-e411-44d7-8663-d19dfc2a76b6\") " Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458746 4707 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458761 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhrn\" (UniqueName: \"kubernetes.io/projected/fb1a1a41-b86e-4de7-af57-7862986b3037-kube-api-access-lmhrn\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458771 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-utilities\") 
on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458779 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb1a1a41-b86e-4de7-af57-7862986b3037-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458789 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h84wn\" (UniqueName: \"kubernetes.io/projected/11795d95-2c15-4a44-9276-6786ec1e5cc1-kube-api-access-h84wn\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.458801 4707 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11795d95-2c15-4a44-9276-6786ec1e5cc1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.460070 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-utilities" (OuterVolumeSpecName: "utilities") pod "1c15410b-e411-44d7-8663-d19dfc2a76b6" (UID: "1c15410b-e411-44d7-8663-d19dfc2a76b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.462593 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c15410b-e411-44d7-8663-d19dfc2a76b6-kube-api-access-c2cbk" (OuterVolumeSpecName: "kube-api-access-c2cbk") pod "1c15410b-e411-44d7-8663-d19dfc2a76b6" (UID: "1c15410b-e411-44d7-8663-d19dfc2a76b6"). InnerVolumeSpecName "kube-api-access-c2cbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.522416 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c15410b-e411-44d7-8663-d19dfc2a76b6" (UID: "1c15410b-e411-44d7-8663-d19dfc2a76b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.560260 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2cbk\" (UniqueName: \"kubernetes.io/projected/1c15410b-e411-44d7-8663-d19dfc2a76b6-kube-api-access-c2cbk\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.560326 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.560340 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c15410b-e411-44d7-8663-d19dfc2a76b6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:48 crc kubenswrapper[4707]: E1213 07:02:47.581033 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8 is running failed: container process not found" containerID="1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 07:02:48 crc kubenswrapper[4707]: E1213 07:02:47.581663 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8 is running failed: container process not found" containerID="1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 07:02:48 crc kubenswrapper[4707]: E1213 07:02:47.582153 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8 is running failed: container process not found" containerID="1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 07:02:48 crc kubenswrapper[4707]: E1213 07:02:47.582237 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-2t9v8" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="registry-server" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.736923 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.736925 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mglrx" event={"ID":"11795d95-2c15-4a44-9276-6786ec1e5cc1","Type":"ContainerDied","Data":"0dca281a318168efe7fe774b77ca2f4a8bb60f9777a87c269e8c9f80c809aafb"} Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.737058 4707 scope.go:117] "RemoveContainer" containerID="ddeb4a3ad18ae5ba6a8ed98d5faf61866a9b41856737f7d7df46bee99abcf9f2" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.741111 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75wd8" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.741219 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75wd8" event={"ID":"fb1a1a41-b86e-4de7-af57-7862986b3037","Type":"ContainerDied","Data":"b97964b51cc49b18e4bcf3a69d49d59d1a608b9661f9fc032e083f5e969f6b10"} Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.751040 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z47zb" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.753213 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z47zb" event={"ID":"1c15410b-e411-44d7-8663-d19dfc2a76b6","Type":"ContainerDied","Data":"757a63c1cf672cd88ce0a4488670ba5de60014b2a7f1d906f8dbba9ea2150a69"} Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.755890 4707 scope.go:117] "RemoveContainer" containerID="c0981413270f046893c18cc80ebbf9c9ead57f894dbb47761c367f2b49727538" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.756465 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt" event={"ID":"3d15d15c-6c4f-4e32-bbaf-caebdb10506c","Type":"ContainerStarted","Data":"5d50a65d058d3d7d963367241ee4614b91df2264db1d6278953b50ae399f19f6"} Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.757100 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.770267 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.777059 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-99vtt" podStartSLOduration=2.777039476 podStartE2EDuration="2.777039476s" podCreationTimestamp="2025-12-13 07:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:02:47.774260704 +0000 UTC m=+476.271439761" watchObservedRunningTime="2025-12-13 07:02:47.777039476 +0000 UTC m=+476.274218523" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.793921 4707 scope.go:117] "RemoveContainer" containerID="c4af05592af84e828bda5bb4f83e16446f08f45301ec7707067f764a4283e889" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.798154 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglrx"] Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.814822 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mglrx"] Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.826702 4707 scope.go:117] "RemoveContainer" containerID="65a1fbadf1f651b71f7a52a2f4de9cc6875e886a8912334e52a669372849b9d1" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.842396 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75wd8"] Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.847414 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-75wd8"] Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.848219 
4707 scope.go:117] "RemoveContainer" containerID="31385c6b5afaf3726b94fef5381c862e94150b8441e088e2ae505ac0ab02f73c" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.854231 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z47zb"] Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.859245 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z47zb"] Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.861401 4707 scope.go:117] "RemoveContainer" containerID="38e1dff808f8ab5ef00855b72d449ffa0ec109a306d3a31278cb3028f9312210" Dec 13 07:02:48 crc kubenswrapper[4707]: I1213 07:02:47.875449 4707 scope.go:117] "RemoveContainer" containerID="ade6202a39e82517cb363379d8dbfcb3d1c516a124c15563df88972cdd3be9a2" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083085 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6n6sc"] Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083562 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083766 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083775 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="extract-utilities" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083782 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="extract-utilities" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083793 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="extract-utilities" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083800 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="extract-utilities" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083806 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="registry-server" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083812 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="registry-server" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083822 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="extract-content" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083828 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="extract-content" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083840 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="registry-server" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083846 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="registry-server" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083857 4707 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083863 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" Dec 13 07:02:49 crc kubenswrapper[4707]: E1213 07:02:49.083874 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="extract-content" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.083879 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="extract-content" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.084037 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.084054 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" containerName="registry-server" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.084062 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" containerName="marketplace-operator" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.084073 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" containerName="registry-server" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.084746 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.088813 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.091447 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gkw57" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.108528 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.110720 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n6sc"] Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.179929 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc6r8\" (UniqueName: \"kubernetes.io/projected/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-kube-api-access-jc6r8\") pod \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180007 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-utilities\") pod \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180033 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-catalog-content\") pod \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\" (UID: \"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64\") " Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180083 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn7c4\" (UniqueName: \"kubernetes.io/projected/6f152c64-57be-4ac0-8422-d5d504f02cf8-kube-api-access-mn7c4\") pod \"6f152c64-57be-4ac0-8422-d5d504f02cf8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180126 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-utilities\") pod \"6f152c64-57be-4ac0-8422-d5d504f02cf8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180160 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-catalog-content\") pod \"6f152c64-57be-4ac0-8422-d5d504f02cf8\" (UID: \"6f152c64-57be-4ac0-8422-d5d504f02cf8\") " Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180416 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lztbj\" (UniqueName: \"kubernetes.io/projected/7fab4da9-5bcc-452f-a764-4050a276c1c9-kube-api-access-lztbj\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180459 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fab4da9-5bcc-452f-a764-4050a276c1c9-utilities\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180532 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fab4da9-5bcc-452f-a764-4050a276c1c9-catalog-content\") pod \"certified-operators-6n6sc\" (UID: 
\"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.180775 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-utilities" (OuterVolumeSpecName: "utilities") pod "bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" (UID: "bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.184056 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-utilities" (OuterVolumeSpecName: "utilities") pod "6f152c64-57be-4ac0-8422-d5d504f02cf8" (UID: "6f152c64-57be-4ac0-8422-d5d504f02cf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.185155 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-kube-api-access-jc6r8" (OuterVolumeSpecName: "kube-api-access-jc6r8") pod "bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" (UID: "bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64"). InnerVolumeSpecName "kube-api-access-jc6r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.194076 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f152c64-57be-4ac0-8422-d5d504f02cf8-kube-api-access-mn7c4" (OuterVolumeSpecName: "kube-api-access-mn7c4") pod "6f152c64-57be-4ac0-8422-d5d504f02cf8" (UID: "6f152c64-57be-4ac0-8422-d5d504f02cf8"). InnerVolumeSpecName "kube-api-access-mn7c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.258529 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" (UID: "bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282577 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lztbj\" (UniqueName: \"kubernetes.io/projected/7fab4da9-5bcc-452f-a764-4050a276c1c9-kube-api-access-lztbj\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282640 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fab4da9-5bcc-452f-a764-4050a276c1c9-utilities\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282706 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fab4da9-5bcc-452f-a764-4050a276c1c9-catalog-content\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282760 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc6r8\" (UniqueName: \"kubernetes.io/projected/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-kube-api-access-jc6r8\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282774 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282785 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282797 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn7c4\" (UniqueName: \"kubernetes.io/projected/6f152c64-57be-4ac0-8422-d5d504f02cf8-kube-api-access-mn7c4\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.282809 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.283242 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fab4da9-5bcc-452f-a764-4050a276c1c9-utilities\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.283380 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fab4da9-5bcc-452f-a764-4050a276c1c9-catalog-content\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.309805 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lztbj\" (UniqueName: 
\"kubernetes.io/projected/7fab4da9-5bcc-452f-a764-4050a276c1c9-kube-api-access-lztbj\") pod \"certified-operators-6n6sc\" (UID: \"7fab4da9-5bcc-452f-a764-4050a276c1c9\") " pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.340043 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f152c64-57be-4ac0-8422-d5d504f02cf8" (UID: "6f152c64-57be-4ac0-8422-d5d504f02cf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.383668 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f152c64-57be-4ac0-8422-d5d504f02cf8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.430499 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.752463 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11795d95-2c15-4a44-9276-6786ec1e5cc1" path="/var/lib/kubelet/pods/11795d95-2c15-4a44-9276-6786ec1e5cc1/volumes" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.753180 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c15410b-e411-44d7-8663-d19dfc2a76b6" path="/var/lib/kubelet/pods/1c15410b-e411-44d7-8663-d19dfc2a76b6/volumes" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.754078 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1a1a41-b86e-4de7-af57-7862986b3037" path="/var/lib/kubelet/pods/fb1a1a41-b86e-4de7-af57-7862986b3037/volumes" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.783688 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2t9v8" event={"ID":"6f152c64-57be-4ac0-8422-d5d504f02cf8","Type":"ContainerDied","Data":"871352def061307db1b690178e6c17d6a219b9919f2b5fefd08360a023f97c1f"} Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.783794 4707 scope.go:117] "RemoveContainer" containerID="1d73b24189b933fc66144a3b1daca27fbbe37c82919178579150b2e861e18ed8" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.784013 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2t9v8" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.792064 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gkw57" event={"ID":"bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64","Type":"ContainerDied","Data":"263d872a9bb16289d5f3a7de029df71a06668c8222680a9aa9dadc8a45d55265"} Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.792468 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gkw57" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.829049 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2t9v8"] Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.831356 4707 scope.go:117] "RemoveContainer" containerID="48f3cb2ac7ed0f66b4f8ef374fd6728ce0df723f61b61882489526a1ad704843" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.843030 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2t9v8"] Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.846936 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gkw57"] Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.861666 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gkw57"] Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.862183 4707 scope.go:117] "RemoveContainer" containerID="69503d6e4404c76713bfef28150f565c6ce93264f6d99605eedbba3527d1dc92" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.874893 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n6sc"] Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.888652 4707 scope.go:117] "RemoveContainer" containerID="9822868977e799d309d5a03935630365c95ae8b2d7d3dcc70afd941b3bed2207" Dec 13 07:02:49 crc kubenswrapper[4707]: W1213 07:02:49.892016 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fab4da9_5bcc_452f_a764_4050a276c1c9.slice/crio-4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231 WatchSource:0}: Error finding container 4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231: Status 404 returned error can't find the container with id 4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231 Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.914000 4707 scope.go:117] "RemoveContainer" containerID="a1d4d085b4ff9d8eb25bf5804607605b1d3480bcfb7bcb8a44af411f571a8a7d" Dec 13 07:02:49 crc kubenswrapper[4707]: I1213 07:02:49.928993 4707 scope.go:117] "RemoveContainer" containerID="7d9025b141ae3f90bef335d2ea4bf47329339720401a51c0aa50d905e069c19e" Dec 13 07:02:50 crc kubenswrapper[4707]: I1213 07:02:50.801400 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n6sc" event={"ID":"7fab4da9-5bcc-452f-a764-4050a276c1c9","Type":"ContainerStarted","Data":"4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231"} Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.482622 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5tfbq"] Dec 13 07:02:51 crc kubenswrapper[4707]: E1213 07:02:51.483118 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="registry-server" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483134 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="registry-server" Dec 13 07:02:51 crc kubenswrapper[4707]: E1213 07:02:51.483147 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="extract-utilities" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483154 4707 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="extract-utilities" Dec 13 07:02:51 crc kubenswrapper[4707]: E1213 07:02:51.483167 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="registry-server" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483173 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="registry-server" Dec 13 07:02:51 crc kubenswrapper[4707]: E1213 07:02:51.483180 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="extract-utilities" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483187 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="extract-utilities" Dec 13 07:02:51 crc kubenswrapper[4707]: E1213 07:02:51.483204 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="extract-content" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483211 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="extract-content" Dec 13 07:02:51 crc kubenswrapper[4707]: E1213 07:02:51.483218 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="extract-content" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483224 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="extract-content" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483307 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" containerName="registry-server" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.483318 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" containerName="registry-server" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.484006 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.492920 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.496467 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tfbq"] Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.515138 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11257d1f-9513-4baf-a682-819b89ebe8eb-catalog-content\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.515182 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11257d1f-9513-4baf-a682-819b89ebe8eb-utilities\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.515211 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wnl\" (UniqueName: \"kubernetes.io/projected/11257d1f-9513-4baf-a682-819b89ebe8eb-kube-api-access-92wnl\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.616932 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11257d1f-9513-4baf-a682-819b89ebe8eb-catalog-content\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.617023 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11257d1f-9513-4baf-a682-819b89ebe8eb-utilities\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.617066 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wnl\" (UniqueName: \"kubernetes.io/projected/11257d1f-9513-4baf-a682-819b89ebe8eb-kube-api-access-92wnl\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.617598 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11257d1f-9513-4baf-a682-819b89ebe8eb-utilities\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.617728 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11257d1f-9513-4baf-a682-819b89ebe8eb-catalog-content\") pod \"redhat-marketplace-5tfbq\" (UID: 
\"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.651557 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wnl\" (UniqueName: \"kubernetes.io/projected/11257d1f-9513-4baf-a682-819b89ebe8eb-kube-api-access-92wnl\") pod \"redhat-marketplace-5tfbq\" (UID: \"11257d1f-9513-4baf-a682-819b89ebe8eb\") " pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.685835 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-thb7x"] Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.686737 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.688580 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.696546 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thb7x"] Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.718654 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bdec10-6902-496b-b5a3-12fa77db4bf2-catalog-content\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.718721 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bdec10-6902-496b-b5a3-12fa77db4bf2-utilities\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.718749 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8d22\" (UniqueName: \"kubernetes.io/projected/b2bdec10-6902-496b-b5a3-12fa77db4bf2-kube-api-access-x8d22\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.755987 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f152c64-57be-4ac0-8422-d5d504f02cf8" path="/var/lib/kubelet/pods/6f152c64-57be-4ac0-8422-d5d504f02cf8/volumes" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.761647 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64" path="/var/lib/kubelet/pods/bafc4eb8-3dc7-48fc-87fb-68a4a7f14c64/volumes" Dec 13 07:02:51 crc kubenswrapper[4707]: W1213 07:02:51.784141 4707 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fab4da9_5bcc_452f_a764_4050a276c1c9.slice/crio-4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231": error while statting cgroup v2: [read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fab4da9_5bcc_452f_a764_4050a276c1c9.slice/crio-4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231/pids.current: no such device], continuing to push stats Dec 13 07:02:51 crc 
kubenswrapper[4707]: I1213 07:02:51.807484 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.809233 4707 generic.go:334] "Generic (PLEG): container finished" podID="7fab4da9-5bcc-452f-a764-4050a276c1c9" containerID="5c1d91db5e18b40ab121d4bd31e0a599080601863bdd39bf82aaa80b724d8150" exitCode=0 Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.809271 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n6sc" event={"ID":"7fab4da9-5bcc-452f-a764-4050a276c1c9","Type":"ContainerDied","Data":"5c1d91db5e18b40ab121d4bd31e0a599080601863bdd39bf82aaa80b724d8150"} Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.816702 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.820092 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bdec10-6902-496b-b5a3-12fa77db4bf2-utilities\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.820133 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8d22\" (UniqueName: \"kubernetes.io/projected/b2bdec10-6902-496b-b5a3-12fa77db4bf2-kube-api-access-x8d22\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.820176 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bdec10-6902-496b-b5a3-12fa77db4bf2-catalog-content\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.820503 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2bdec10-6902-496b-b5a3-12fa77db4bf2-catalog-content\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.820712 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2bdec10-6902-496b-b5a3-12fa77db4bf2-utilities\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:51 crc kubenswrapper[4707]: I1213 07:02:51.838970 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8d22\" (UniqueName: \"kubernetes.io/projected/b2bdec10-6902-496b-b5a3-12fa77db4bf2-kube-api-access-x8d22\") pod \"redhat-operators-thb7x\" (UID: \"b2bdec10-6902-496b-b5a3-12fa77db4bf2\") " pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.017313 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.026079 4707 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.222380 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5tfbq"] Dec 13 07:02:52 crc kubenswrapper[4707]: W1213 07:02:52.227791 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11257d1f_9513_4baf_a682_819b89ebe8eb.slice/crio-177e9798127aa10d2b096d77b9f8a6dea5830d714d68a1e1b0f331db03e3be4d WatchSource:0}: Error finding container 177e9798127aa10d2b096d77b9f8a6dea5830d714d68a1e1b0f331db03e3be4d: Status 404 returned error can't find the container with id 177e9798127aa10d2b096d77b9f8a6dea5830d714d68a1e1b0f331db03e3be4d Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.427628 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-thb7x"] Dec 13 07:02:52 crc kubenswrapper[4707]: W1213 07:02:52.435568 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2bdec10_6902_496b_b5a3_12fa77db4bf2.slice/crio-e1a7632ca17f5430d352273be251230d7d3b7f50010188e7f7388cd46417cc68 WatchSource:0}: Error finding container e1a7632ca17f5430d352273be251230d7d3b7f50010188e7f7388cd46417cc68: Status 404 returned error can't find the container with id e1a7632ca17f5430d352273be251230d7d3b7f50010188e7f7388cd46417cc68 Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.815806 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb7x" event={"ID":"b2bdec10-6902-496b-b5a3-12fa77db4bf2","Type":"ContainerStarted","Data":"e1a7632ca17f5430d352273be251230d7d3b7f50010188e7f7388cd46417cc68"} Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.816865 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tfbq" event={"ID":"11257d1f-9513-4baf-a682-819b89ebe8eb","Type":"ContainerStarted","Data":"177e9798127aa10d2b096d77b9f8a6dea5830d714d68a1e1b0f331db03e3be4d"} Dec 13 07:02:52 crc kubenswrapper[4707]: I1213 07:02:52.819065 4707 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.347485 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.347546 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.347591 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.348031 4707 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f95c4d54b8af173db296a2e5d0f55ada85a6b3bff3ad85c18367cf3d7c82b54d"} 
pod="openshift-machine-config-operator/machine-config-daemon-ns45v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.348094 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" containerID="cri-o://f95c4d54b8af173db296a2e5d0f55ada85a6b3bff3ad85c18367cf3d7c82b54d" gracePeriod=600 Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.821916 4707 generic.go:334] "Generic (PLEG): container finished" podID="11257d1f-9513-4baf-a682-819b89ebe8eb" containerID="5f95e5b969e1267c705bb44f25a473fd51f82c8f643fbe9b42949b07c3b40a4b" exitCode=0 Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.822085 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tfbq" event={"ID":"11257d1f-9513-4baf-a682-819b89ebe8eb","Type":"ContainerDied","Data":"5f95e5b969e1267c705bb44f25a473fd51f82c8f643fbe9b42949b07c3b40a4b"} Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.823186 4707 generic.go:334] "Generic (PLEG): container finished" podID="b2bdec10-6902-496b-b5a3-12fa77db4bf2" containerID="9f9590ef8edbecb1311c370b2ababafc1f670abf040faacfeb5889933f83f3e7" exitCode=0 Dec 13 07:02:53 crc kubenswrapper[4707]: I1213 07:02:53.823216 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb7x" event={"ID":"b2bdec10-6902-496b-b5a3-12fa77db4bf2","Type":"ContainerDied","Data":"9f9590ef8edbecb1311c370b2ababafc1f670abf040faacfeb5889933f83f3e7"} Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.086826 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wr87k"] Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.088062 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.092512 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.093527 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wr87k"] Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.150547 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-utilities\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.150621 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fhvz\" (UniqueName: \"kubernetes.io/projected/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-kube-api-access-6fhvz\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.150650 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-catalog-content\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.265317 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-utilities\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.265800 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fhvz\" (UniqueName: \"kubernetes.io/projected/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-kube-api-access-6fhvz\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.265896 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-catalog-content\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.266048 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-utilities\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.266502 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-catalog-content\") pod \"community-operators-wr87k\" (UID: 
\"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.287365 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fhvz\" (UniqueName: \"kubernetes.io/projected/7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c-kube-api-access-6fhvz\") pod \"community-operators-wr87k\" (UID: \"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c\") " pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.408862 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.830128 4707 generic.go:334] "Generic (PLEG): container finished" podID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerID="f95c4d54b8af173db296a2e5d0f55ada85a6b3bff3ad85c18367cf3d7c82b54d" exitCode=0 Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.830785 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerDied","Data":"f95c4d54b8af173db296a2e5d0f55ada85a6b3bff3ad85c18367cf3d7c82b54d"} Dec 13 07:02:54 crc kubenswrapper[4707]: I1213 07:02:54.830816 4707 scope.go:117] "RemoveContainer" containerID="07bab14d70cf4aec6174e412c18e36ec4459ceec36c5b09bffa55fe19dcfc1a2" Dec 13 07:02:57 crc kubenswrapper[4707]: I1213 07:02:57.206310 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wr87k"] Dec 13 07:02:57 crc kubenswrapper[4707]: W1213 07:02:57.211318 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7baec2ab_aeee_4166_9fd6_8a3b92a3aa5c.slice/crio-b4536d4befac6ba708bd46f73c60b9bebaf2fdf0316da0593356ef212c6b75ad WatchSource:0}: Error finding container b4536d4befac6ba708bd46f73c60b9bebaf2fdf0316da0593356ef212c6b75ad: Status 404 returned error can't find the container with id b4536d4befac6ba708bd46f73c60b9bebaf2fdf0316da0593356ef212c6b75ad Dec 13 07:02:57 crc kubenswrapper[4707]: I1213 07:02:57.844877 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n6sc" event={"ID":"7fab4da9-5bcc-452f-a764-4050a276c1c9","Type":"ContainerStarted","Data":"720de4f2a6d22f2a817d113873d71f41ab06ecee5611f04710f210c1986677fc"} Dec 13 07:02:57 crc kubenswrapper[4707]: I1213 07:02:57.849438 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr87k" event={"ID":"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c","Type":"ContainerStarted","Data":"b4536d4befac6ba708bd46f73c60b9bebaf2fdf0316da0593356ef212c6b75ad"} Dec 13 07:02:57 crc kubenswrapper[4707]: I1213 07:02:57.983925 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" podUID="b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" containerName="registry" containerID="cri-o://46be07b69cf10024389cc3ec5ea249a9d7283435d6f8404f8d800c3ac15a59b3" gracePeriod=30 Dec 13 07:02:58 crc kubenswrapper[4707]: E1213 07:02:58.169759 4707 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fab4da9_5bcc_452f_a764_4050a276c1c9.slice/crio-4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231\": 
RecentStats: unable to find data in memory cache]" Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.856892 4707 generic.go:334] "Generic (PLEG): container finished" podID="7fab4da9-5bcc-452f-a764-4050a276c1c9" containerID="720de4f2a6d22f2a817d113873d71f41ab06ecee5611f04710f210c1986677fc" exitCode=0 Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.857102 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n6sc" event={"ID":"7fab4da9-5bcc-452f-a764-4050a276c1c9","Type":"ContainerDied","Data":"720de4f2a6d22f2a817d113873d71f41ab06ecee5611f04710f210c1986677fc"} Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.863636 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"6fd2276cf424ca8b43506f8af53f629147ab967daf2433a379229b822db2b6d7"} Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.868995 4707 generic.go:334] "Generic (PLEG): container finished" podID="b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" containerID="46be07b69cf10024389cc3ec5ea249a9d7283435d6f8404f8d800c3ac15a59b3" exitCode=0 Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.869099 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" event={"ID":"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e","Type":"ContainerDied","Data":"46be07b69cf10024389cc3ec5ea249a9d7283435d6f8404f8d800c3ac15a59b3"} Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.871036 4707 generic.go:334] "Generic (PLEG): container finished" podID="7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c" containerID="b88656e56792965b7859e6de5a1f7658845f614e5d6ba476bdd28a8eaae759cb" exitCode=0 Dec 13 07:02:58 crc kubenswrapper[4707]: I1213 07:02:58.871073 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr87k" event={"ID":"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c","Type":"ContainerDied","Data":"b88656e56792965b7859e6de5a1f7658845f614e5d6ba476bdd28a8eaae759cb"} Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.481233 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.529549 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-certificates\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.529703 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-ca-trust-extracted\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.529777 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-bound-sa-token\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.529895 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.529939 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-installation-pull-secrets\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.529979 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-tls\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.530005 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-trusted-ca\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.530025 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8mg6\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-kube-api-access-w8mg6\") pod \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\" (UID: \"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e\") " Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.530400 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.530415 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.537681 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.542119 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-kube-api-access-w8mg6" (OuterVolumeSpecName: "kube-api-access-w8mg6") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "kube-api-access-w8mg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.542845 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.543226 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.549022 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.554811 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" (UID: "b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631141 4707 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631169 4707 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631179 4707 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631206 4707 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631217 4707 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631225 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8mg6\" (UniqueName: \"kubernetes.io/projected/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-kube-api-access-w8mg6\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.631234 4707 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.876417 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" event={"ID":"b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e","Type":"ContainerDied","Data":"a992605143d1d4000235aed84a993664fbedb1af2b927f8403e985bf38685f71"} Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.876485 4707 scope.go:117] "RemoveContainer" containerID="46be07b69cf10024389cc3ec5ea249a9d7283435d6f8404f8d800c3ac15a59b3" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.876490 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-995mf" Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.911444 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-995mf"] Dec 13 07:02:59 crc kubenswrapper[4707]: I1213 07:02:59.918624 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-995mf"] Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.751628 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" path="/var/lib/kubelet/pods/b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e/volumes" Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.892744 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr87k" event={"ID":"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c","Type":"ContainerStarted","Data":"7697b195fe367f7d384875a8ece688d1845aeb2bece78c4dee26f841fa5c444f"} Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.896266 4707 generic.go:334] "Generic (PLEG): container finished" podID="11257d1f-9513-4baf-a682-819b89ebe8eb" containerID="af1da7300feabb870cd0e44c9c1096b56e53da1eefe8d142afcc8682dfb5c3bb" exitCode=0 Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.896304 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tfbq" event={"ID":"11257d1f-9513-4baf-a682-819b89ebe8eb","Type":"ContainerDied","Data":"af1da7300feabb870cd0e44c9c1096b56e53da1eefe8d142afcc8682dfb5c3bb"} Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.899451 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n6sc" event={"ID":"7fab4da9-5bcc-452f-a764-4050a276c1c9","Type":"ContainerStarted","Data":"2e3fd0331b4a75d124ee46d3467081ce848bb84d4648ecbf9553896d813dc0a7"} Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.902299 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb7x" event={"ID":"b2bdec10-6902-496b-b5a3-12fa77db4bf2","Type":"ContainerStarted","Data":"9e923545b95e47a43dd4400c325911afb44a20b471d736b662003e86abdf15b9"} Dec 13 07:03:01 crc kubenswrapper[4707]: I1213 07:03:01.959698 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6n6sc" podStartSLOduration=4.703506746 podStartE2EDuration="12.959681308s" podCreationTimestamp="2025-12-13 07:02:49 +0000 UTC" firstStartedPulling="2025-12-13 07:02:52.81855614 +0000 UTC m=+481.315735237" lastFinishedPulling="2025-12-13 07:03:01.074730752 +0000 UTC m=+489.571909799" observedRunningTime="2025-12-13 07:03:01.939976161 +0000 UTC m=+490.437155418" watchObservedRunningTime="2025-12-13 07:03:01.959681308 +0000 UTC m=+490.456860355" Dec 13 07:03:02 crc kubenswrapper[4707]: I1213 07:03:02.925109 4707 generic.go:334] "Generic (PLEG): container finished" podID="b2bdec10-6902-496b-b5a3-12fa77db4bf2" containerID="9e923545b95e47a43dd4400c325911afb44a20b471d736b662003e86abdf15b9" exitCode=0 Dec 13 07:03:02 crc kubenswrapper[4707]: I1213 07:03:02.925192 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb7x" event={"ID":"b2bdec10-6902-496b-b5a3-12fa77db4bf2","Type":"ContainerDied","Data":"9e923545b95e47a43dd4400c325911afb44a20b471d736b662003e86abdf15b9"} Dec 13 07:03:02 crc kubenswrapper[4707]: I1213 07:03:02.934930 4707 generic.go:334] "Generic (PLEG): 
container finished" podID="7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c" containerID="7697b195fe367f7d384875a8ece688d1845aeb2bece78c4dee26f841fa5c444f" exitCode=0 Dec 13 07:03:02 crc kubenswrapper[4707]: I1213 07:03:02.935542 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr87k" event={"ID":"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c","Type":"ContainerDied","Data":"7697b195fe367f7d384875a8ece688d1845aeb2bece78c4dee26f841fa5c444f"} Dec 13 07:03:04 crc kubenswrapper[4707]: I1213 07:03:04.949227 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-thb7x" event={"ID":"b2bdec10-6902-496b-b5a3-12fa77db4bf2","Type":"ContainerStarted","Data":"be1c866c2f8730a8941603f8cbe1fb7a78faf0874516842e3da73ac28d6bd8e1"} Dec 13 07:03:04 crc kubenswrapper[4707]: I1213 07:03:04.950632 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wr87k" event={"ID":"7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c","Type":"ContainerStarted","Data":"17a171dcdef45ecaf940e595b037cdf82b9efff6b519c797e01b98bdfac636ba"} Dec 13 07:03:04 crc kubenswrapper[4707]: I1213 07:03:04.992902 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-thb7x" podStartSLOduration=6.667437027 podStartE2EDuration="13.99286595s" podCreationTimestamp="2025-12-13 07:02:51 +0000 UTC" firstStartedPulling="2025-12-13 07:02:56.750586803 +0000 UTC m=+485.247765850" lastFinishedPulling="2025-12-13 07:03:04.076015726 +0000 UTC m=+492.573194773" observedRunningTime="2025-12-13 07:03:04.972252749 +0000 UTC m=+493.469431806" watchObservedRunningTime="2025-12-13 07:03:04.99286595 +0000 UTC m=+493.490044997" Dec 13 07:03:04 crc kubenswrapper[4707]: I1213 07:03:04.994258 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wr87k" podStartSLOduration=7.88512651 podStartE2EDuration="10.994252656s" podCreationTimestamp="2025-12-13 07:02:54 +0000 UTC" firstStartedPulling="2025-12-13 07:03:00.754550405 +0000 UTC m=+489.251729472" lastFinishedPulling="2025-12-13 07:03:03.863676571 +0000 UTC m=+492.360855618" observedRunningTime="2025-12-13 07:03:04.990979511 +0000 UTC m=+493.488158558" watchObservedRunningTime="2025-12-13 07:03:04.994252656 +0000 UTC m=+493.491431703" Dec 13 07:03:08 crc kubenswrapper[4707]: E1213 07:03:08.291983 4707 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fab4da9_5bcc_452f_a764_4050a276c1c9.slice/crio-4a61bcb699fc66e2ef85efd93d6ade26d0280626fdba0db7cbe9ad7c0b3b7231\": RecentStats: unable to find data in memory cache]" Dec 13 07:03:09 crc kubenswrapper[4707]: I1213 07:03:09.430731 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:03:09 crc kubenswrapper[4707]: I1213 07:03:09.431621 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:03:09 crc kubenswrapper[4707]: I1213 07:03:09.480936 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 07:03:10 crc kubenswrapper[4707]: I1213 07:03:10.037551 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6n6sc" Dec 13 
07:03:12 crc kubenswrapper[4707]: I1213 07:03:12.026254 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:03:12 crc kubenswrapper[4707]: I1213 07:03:12.026828 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:03:12 crc kubenswrapper[4707]: I1213 07:03:12.071618 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:03:13 crc kubenswrapper[4707]: I1213 07:03:13.040625 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-thb7x" Dec 13 07:03:14 crc kubenswrapper[4707]: I1213 07:03:14.409586 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:03:14 crc kubenswrapper[4707]: I1213 07:03:14.410000 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:03:14 crc kubenswrapper[4707]: I1213 07:03:14.461220 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:03:15 crc kubenswrapper[4707]: I1213 07:03:15.459080 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wr87k" Dec 13 07:03:16 crc kubenswrapper[4707]: I1213 07:03:16.412415 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5tfbq" event={"ID":"11257d1f-9513-4baf-a682-819b89ebe8eb","Type":"ContainerStarted","Data":"909865e3c45053ba139338a1ac9e42ee59d3c8ff01955f7b7523f2451e11c577"} Dec 13 07:03:16 crc kubenswrapper[4707]: I1213 07:03:16.441254 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5tfbq" podStartSLOduration=6.853753593 podStartE2EDuration="25.441230275s" podCreationTimestamp="2025-12-13 07:02:51 +0000 UTC" firstStartedPulling="2025-12-13 07:02:56.745160301 +0000 UTC m=+485.242339348" lastFinishedPulling="2025-12-13 07:03:15.332636983 +0000 UTC m=+503.829816030" observedRunningTime="2025-12-13 07:03:16.43122941 +0000 UTC m=+504.928408457" watchObservedRunningTime="2025-12-13 07:03:16.441230275 +0000 UTC m=+504.938409323" Dec 13 07:03:21 crc kubenswrapper[4707]: I1213 07:03:21.818057 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:03:21 crc kubenswrapper[4707]: I1213 07:03:21.819155 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:03:21 crc kubenswrapper[4707]: I1213 07:03:21.871527 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:03:22 crc kubenswrapper[4707]: I1213 07:03:22.490945 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5tfbq" Dec 13 07:05:23 crc kubenswrapper[4707]: I1213 07:05:23.348095 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 13 07:05:23 crc kubenswrapper[4707]: I1213 07:05:23.348766 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:05:53 crc kubenswrapper[4707]: I1213 07:05:53.348530 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:05:53 crc kubenswrapper[4707]: I1213 07:05:53.349396 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.348218 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.348696 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.348738 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.349433 4707 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fd2276cf424ca8b43506f8af53f629147ab967daf2433a379229b822db2b6d7"} pod="openshift-machine-config-operator/machine-config-daemon-ns45v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.349492 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" containerID="cri-o://6fd2276cf424ca8b43506f8af53f629147ab967daf2433a379229b822db2b6d7" gracePeriod=600 Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.513305 4707 generic.go:334] "Generic (PLEG): container finished" podID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerID="6fd2276cf424ca8b43506f8af53f629147ab967daf2433a379229b822db2b6d7" exitCode=0 Dec 13 07:06:23 crc kubenswrapper[4707]: I1213 07:06:23.513380 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerDied","Data":"6fd2276cf424ca8b43506f8af53f629147ab967daf2433a379229b822db2b6d7"} Dec 13 07:06:23 crc 
kubenswrapper[4707]: I1213 07:06:23.513676 4707 scope.go:117] "RemoveContainer" containerID="f95c4d54b8af173db296a2e5d0f55ada85a6b3bff3ad85c18367cf3d7c82b54d" Dec 13 07:06:24 crc kubenswrapper[4707]: I1213 07:06:24.522513 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"3b0f48925884135b2d976248bf9fa53c361bfaa4e83eae2a7f0b3a53884319c7"} Dec 13 07:06:45 crc kubenswrapper[4707]: I1213 07:06:45.949012 4707 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 13 07:08:23 crc kubenswrapper[4707]: I1213 07:08:23.347556 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:08:23 crc kubenswrapper[4707]: I1213 07:08:23.348374 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:08:53 crc kubenswrapper[4707]: I1213 07:08:53.348005 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:08:53 crc kubenswrapper[4707]: I1213 07:08:53.348708 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:09:23 crc kubenswrapper[4707]: I1213 07:09:23.347923 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:09:23 crc kubenswrapper[4707]: I1213 07:09:23.349026 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:09:23 crc kubenswrapper[4707]: I1213 07:09:23.349214 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 07:09:23 crc kubenswrapper[4707]: I1213 07:09:23.349815 4707 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b0f48925884135b2d976248bf9fa53c361bfaa4e83eae2a7f0b3a53884319c7"} pod="openshift-machine-config-operator/machine-config-daemon-ns45v" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Dec 13 07:09:23 crc kubenswrapper[4707]: I1213 07:09:23.349875 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" containerID="cri-o://3b0f48925884135b2d976248bf9fa53c361bfaa4e83eae2a7f0b3a53884319c7" gracePeriod=600 Dec 13 07:09:24 crc kubenswrapper[4707]: I1213 07:09:24.542925 4707 generic.go:334] "Generic (PLEG): container finished" podID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerID="3b0f48925884135b2d976248bf9fa53c361bfaa4e83eae2a7f0b3a53884319c7" exitCode=0 Dec 13 07:09:24 crc kubenswrapper[4707]: I1213 07:09:24.542980 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerDied","Data":"3b0f48925884135b2d976248bf9fa53c361bfaa4e83eae2a7f0b3a53884319c7"} Dec 13 07:09:24 crc kubenswrapper[4707]: I1213 07:09:24.543323 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"fe978c3da49e49b18f17beb4fbb2341d90728ac6f7434b6a66e7d821f3fdc208"} Dec 13 07:09:24 crc kubenswrapper[4707]: I1213 07:09:24.543341 4707 scope.go:117] "RemoveContainer" containerID="6fd2276cf424ca8b43506f8af53f629147ab967daf2433a379229b822db2b6d7" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.176922 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-r6xpb"] Dec 13 07:09:54 crc kubenswrapper[4707]: E1213 07:09:54.178761 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" containerName="registry" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.178865 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" containerName="registry" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.179118 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0aa6223-2c6c-4c37-bd8f-b2a32de5a42e" containerName="registry" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.179649 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.184121 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.184138 4707 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bd2hb" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.184127 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.195524 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-r6xpb"] Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.200500 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d67ww"] Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.201412 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d67ww" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.206335 4707 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qzdfx" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.219569 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x6nvs"] Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.220480 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.223041 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d67ww"] Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.223065 4707 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c8ff6" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.230059 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x6nvs"] Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.302619 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7crd\" (UniqueName: \"kubernetes.io/projected/7f59dcbe-3c45-493b-8a33-53dde9b70415-kube-api-access-n7crd\") pod \"cert-manager-cainjector-7f985d654d-r6xpb\" (UID: \"7f59dcbe-3c45-493b-8a33-53dde9b70415\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.302682 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthlj\" (UniqueName: \"kubernetes.io/projected/d286b639-0e27-47ac-a85b-dbb12e6cbabe-kube-api-access-tthlj\") pod \"cert-manager-webhook-5655c58dd6-x6nvs\" (UID: \"d286b639-0e27-47ac-a85b-dbb12e6cbabe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.302734 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snqrh\" (UniqueName: \"kubernetes.io/projected/2c36b58d-7422-4016-b37a-cbe2b73d7168-kube-api-access-snqrh\") pod \"cert-manager-5b446d88c5-d67ww\" (UID: \"2c36b58d-7422-4016-b37a-cbe2b73d7168\") " pod="cert-manager/cert-manager-5b446d88c5-d67ww" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.404250 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7crd\" (UniqueName: \"kubernetes.io/projected/7f59dcbe-3c45-493b-8a33-53dde9b70415-kube-api-access-n7crd\") pod \"cert-manager-cainjector-7f985d654d-r6xpb\" (UID: \"7f59dcbe-3c45-493b-8a33-53dde9b70415\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.404386 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthlj\" (UniqueName: \"kubernetes.io/projected/d286b639-0e27-47ac-a85b-dbb12e6cbabe-kube-api-access-tthlj\") pod \"cert-manager-webhook-5655c58dd6-x6nvs\" (UID: \"d286b639-0e27-47ac-a85b-dbb12e6cbabe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.404531 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snqrh\" (UniqueName: 
\"kubernetes.io/projected/2c36b58d-7422-4016-b37a-cbe2b73d7168-kube-api-access-snqrh\") pod \"cert-manager-5b446d88c5-d67ww\" (UID: \"2c36b58d-7422-4016-b37a-cbe2b73d7168\") " pod="cert-manager/cert-manager-5b446d88c5-d67ww" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.424086 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthlj\" (UniqueName: \"kubernetes.io/projected/d286b639-0e27-47ac-a85b-dbb12e6cbabe-kube-api-access-tthlj\") pod \"cert-manager-webhook-5655c58dd6-x6nvs\" (UID: \"d286b639-0e27-47ac-a85b-dbb12e6cbabe\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.428291 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7crd\" (UniqueName: \"kubernetes.io/projected/7f59dcbe-3c45-493b-8a33-53dde9b70415-kube-api-access-n7crd\") pod \"cert-manager-cainjector-7f985d654d-r6xpb\" (UID: \"7f59dcbe-3c45-493b-8a33-53dde9b70415\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.430446 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snqrh\" (UniqueName: \"kubernetes.io/projected/2c36b58d-7422-4016-b37a-cbe2b73d7168-kube-api-access-snqrh\") pod \"cert-manager-5b446d88c5-d67ww\" (UID: \"2c36b58d-7422-4016-b37a-cbe2b73d7168\") " pod="cert-manager/cert-manager-5b446d88c5-d67ww" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.500196 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.524785 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d67ww" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.537226 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.750922 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-r6xpb"] Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.763085 4707 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 13 07:09:54 crc kubenswrapper[4707]: I1213 07:09:54.813633 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d67ww"] Dec 13 07:09:55 crc kubenswrapper[4707]: I1213 07:09:55.076227 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-x6nvs"] Dec 13 07:09:55 crc kubenswrapper[4707]: I1213 07:09:55.706276 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" event={"ID":"d286b639-0e27-47ac-a85b-dbb12e6cbabe","Type":"ContainerStarted","Data":"3093ff970a7df141262a2f15dbfd47710f11012faea896c88b8aabaa520eae69"} Dec 13 07:09:55 crc kubenswrapper[4707]: I1213 07:09:55.707286 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d67ww" event={"ID":"2c36b58d-7422-4016-b37a-cbe2b73d7168","Type":"ContainerStarted","Data":"02c1d2b8062111df67e51e03a789e37eecaffcccb342015da4b265461acca1cc"} Dec 13 07:09:55 crc kubenswrapper[4707]: I1213 07:09:55.708307 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" event={"ID":"7f59dcbe-3c45-493b-8a33-53dde9b70415","Type":"ContainerStarted","Data":"a5e025fa2a038e22789b16988e5379dc29add96e1a500a3d70765e8efa290410"} Dec 13 07:10:01 crc kubenswrapper[4707]: I1213 07:10:01.743724 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d67ww" event={"ID":"2c36b58d-7422-4016-b37a-cbe2b73d7168","Type":"ContainerStarted","Data":"cd92b635d6676898110fa0b803e67d964dc16f62371d1b61656d0f41f8749cb7"} Dec 13 07:10:01 crc kubenswrapper[4707]: I1213 07:10:01.819492 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-d67ww" podStartSLOduration=1.8503822429999999 podStartE2EDuration="7.819473152s" podCreationTimestamp="2025-12-13 07:09:54 +0000 UTC" firstStartedPulling="2025-12-13 07:09:54.824143991 +0000 UTC m=+903.321323038" lastFinishedPulling="2025-12-13 07:10:00.7932349 +0000 UTC m=+909.290413947" observedRunningTime="2025-12-13 07:10:01.818605506 +0000 UTC m=+910.315784553" watchObservedRunningTime="2025-12-13 07:10:01.819473152 +0000 UTC m=+910.316652199" Dec 13 07:10:02 crc kubenswrapper[4707]: I1213 07:10:02.749973 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" event={"ID":"7f59dcbe-3c45-493b-8a33-53dde9b70415","Type":"ContainerStarted","Data":"848c6dc4108f2091d8e03aaeb9d6c4c0f7fb8474ad04a04a52ed69e6e6474911"} Dec 13 07:10:02 crc kubenswrapper[4707]: I1213 07:10:02.751164 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" event={"ID":"d286b639-0e27-47ac-a85b-dbb12e6cbabe","Type":"ContainerStarted","Data":"e80f7ab7aa64f3293da6898e2a0a020b261279aa3126ed8894a8d314540a1881"} Dec 13 07:10:02 crc kubenswrapper[4707]: I1213 07:10:02.764481 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-r6xpb" 
podStartSLOduration=1.5420368660000001 podStartE2EDuration="8.764457334s" podCreationTimestamp="2025-12-13 07:09:54 +0000 UTC" firstStartedPulling="2025-12-13 07:09:54.762825733 +0000 UTC m=+903.260004780" lastFinishedPulling="2025-12-13 07:10:01.985246201 +0000 UTC m=+910.482425248" observedRunningTime="2025-12-13 07:10:02.762190474 +0000 UTC m=+911.259369521" watchObservedRunningTime="2025-12-13 07:10:02.764457334 +0000 UTC m=+911.261636391" Dec 13 07:10:02 crc kubenswrapper[4707]: I1213 07:10:02.780178 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" podStartSLOduration=1.7800772980000001 podStartE2EDuration="8.780156853s" podCreationTimestamp="2025-12-13 07:09:54 +0000 UTC" firstStartedPulling="2025-12-13 07:09:55.083281711 +0000 UTC m=+903.580460758" lastFinishedPulling="2025-12-13 07:10:02.083361266 +0000 UTC m=+910.580540313" observedRunningTime="2025-12-13 07:10:02.778974967 +0000 UTC m=+911.276154014" watchObservedRunningTime="2025-12-13 07:10:02.780156853 +0000 UTC m=+911.277335900" Dec 13 07:10:03 crc kubenswrapper[4707]: I1213 07:10:03.756199 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.394953 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg6dr"] Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.395745 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-controller" containerID="cri-o://767ec9824a1b0a79b27ddbfacf8a6b826f923cacae2e04828df966a96703f4eb" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.395891 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-node" containerID="cri-o://230d17f3f2abe60964e8980445b9843d73950af2664a639da41c7afe49cc4d07" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.395899 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="nbdb" containerID="cri-o://9ef491ec513059679af28fc2fb363e749b4299a9d2e73359ea9bc7e2ac9b155d" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.395971 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="northd" containerID="cri-o://a57d7bf1733c9f6cbd1a3a4097fc43695cedac944f8b58559ed648f670ec467e" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.395948 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-acl-logging" containerID="cri-o://1e88ec0cfadd022acfdd5c85374f6d815597557529db009d9347695612d8a8d9" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.395899 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://70fb495b484807ae9dfc81f2ac8873535d393248a9347f981767742b33fa8e87" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.396708 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="sbdb" containerID="cri-o://49e3de166d57b13c2482a740c648ea5a0e884f61418cb7d2ef7eb8699aac2db5" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.462896 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovnkube-controller" containerID="cri-o://435016b9a8aedf8c26700b1279b7dc38ded11ab6c7d147f5cc133c3a05b27e9e" gracePeriod=30 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.764951 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg6dr_75175fa9-a1b3-470e-a782-8edf5b3d464b/ovn-acl-logging/0.log" Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.765997 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg6dr_75175fa9-a1b3-470e-a782-8edf5b3d464b/ovn-controller/0.log" Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766301 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="435016b9a8aedf8c26700b1279b7dc38ded11ab6c7d147f5cc133c3a05b27e9e" exitCode=0 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766325 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="49e3de166d57b13c2482a740c648ea5a0e884f61418cb7d2ef7eb8699aac2db5" exitCode=0 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766332 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="9ef491ec513059679af28fc2fb363e749b4299a9d2e73359ea9bc7e2ac9b155d" exitCode=0 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766340 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="a57d7bf1733c9f6cbd1a3a4097fc43695cedac944f8b58559ed648f670ec467e" exitCode=0 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766347 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="70fb495b484807ae9dfc81f2ac8873535d393248a9347f981767742b33fa8e87" exitCode=0 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766353 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="230d17f3f2abe60964e8980445b9843d73950af2664a639da41c7afe49cc4d07" exitCode=0 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766360 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="1e88ec0cfadd022acfdd5c85374f6d815597557529db009d9347695612d8a8d9" exitCode=143 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766367 4707 generic.go:334] "Generic (PLEG): container finished" podID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerID="767ec9824a1b0a79b27ddbfacf8a6b826f923cacae2e04828df966a96703f4eb" exitCode=143 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766401 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" 
event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"435016b9a8aedf8c26700b1279b7dc38ded11ab6c7d147f5cc133c3a05b27e9e"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766429 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"49e3de166d57b13c2482a740c648ea5a0e884f61418cb7d2ef7eb8699aac2db5"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766440 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"9ef491ec513059679af28fc2fb363e749b4299a9d2e73359ea9bc7e2ac9b155d"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766449 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"a57d7bf1733c9f6cbd1a3a4097fc43695cedac944f8b58559ed648f670ec467e"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766459 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"70fb495b484807ae9dfc81f2ac8873535d393248a9347f981767742b33fa8e87"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766468 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"230d17f3f2abe60964e8980445b9843d73950af2664a639da41c7afe49cc4d07"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766477 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"1e88ec0cfadd022acfdd5c85374f6d815597557529db009d9347695612d8a8d9"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.766485 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"767ec9824a1b0a79b27ddbfacf8a6b826f923cacae2e04828df966a96703f4eb"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.768023 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kmn45_c99dbcda-959f-4e7c-80ef-ba6f268ee22d/kube-multus/0.log" Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.768052 4707 generic.go:334] "Generic (PLEG): container finished" podID="c99dbcda-959f-4e7c-80ef-ba6f268ee22d" containerID="5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849" exitCode=2 Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.768580 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kmn45" event={"ID":"c99dbcda-959f-4e7c-80ef-ba6f268ee22d","Type":"ContainerDied","Data":"5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849"} Dec 13 07:10:04 crc kubenswrapper[4707]: I1213 07:10:04.768809 4707 scope.go:117] "RemoveContainer" containerID="5782049297d448b980c5e8cb63156bb34c8a95006091dc4e2437d3cc88913849" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.106732 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg6dr_75175fa9-a1b3-470e-a782-8edf5b3d464b/ovn-acl-logging/0.log" Dec 
13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.107690 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg6dr_75175fa9-a1b3-470e-a782-8edf5b3d464b/ovn-controller/0.log" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.108251 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154819 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-netns\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154873 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-systemd-units\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154904 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154931 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-ovn\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154960 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-bin\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154969 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155011 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-config\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.154964 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155000 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155006 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155012 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155059 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-log-socket\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155078 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-log-socket" (OuterVolumeSpecName: "log-socket") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155106 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-netd\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155146 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-systemd\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155186 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-node-log\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155186 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155216 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-env-overrides\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155266 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-kubelet\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155292 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovn-node-metrics-cert\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155313 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-script-lib\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155335 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-var-lib-openvswitch\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155365 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-openvswitch\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155389 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-etc-openvswitch\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155427 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-slash\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155461 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155468 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjvt2\" (UniqueName: \"kubernetes.io/projected/75175fa9-a1b3-470e-a782-8edf5b3d464b-kube-api-access-rjvt2\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155502 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-ovn-kubernetes\") pod \"75175fa9-a1b3-470e-a782-8edf5b3d464b\" (UID: \"75175fa9-a1b3-470e-a782-8edf5b3d464b\") " Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155798 4707 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-log-socket\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155813 4707 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155825 4707 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155835 4707 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155846 4707 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155859 4707 reconciler_common.go:293] "Volume detached 
for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155871 4707 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155882 4707 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155912 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155941 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.155987 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.156024 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.156302 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.156332 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-node-log" (OuterVolumeSpecName: "node-log") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.156356 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.156378 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-slash" (OuterVolumeSpecName: "host-slash") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.156426 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.161408 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.168691 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gsffn"] Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.168965 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-acl-logging" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169006 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-acl-logging" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169023 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169031 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169046 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-controller" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169056 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-controller" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169067 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-node" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169076 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-node" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169087 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovnkube-controller" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169096 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovnkube-controller" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169107 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="nbdb" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169115 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="nbdb" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169132 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="sbdb" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169141 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="sbdb" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169151 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="northd" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169158 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="northd" Dec 13 07:10:05 crc kubenswrapper[4707]: E1213 07:10:05.169168 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" 
containerName="kubecfg-setup" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169175 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kubecfg-setup" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169282 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-acl-logging" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169372 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="northd" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169384 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="sbdb" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169395 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="nbdb" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169404 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovn-controller" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169411 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-ovn-metrics" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169423 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="ovnkube-controller" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169432 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" containerName="kube-rbac-proxy-node" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.169520 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75175fa9-a1b3-470e-a782-8edf5b3d464b-kube-api-access-rjvt2" (OuterVolumeSpecName: "kube-api-access-rjvt2") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "kube-api-access-rjvt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.170514 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "75175fa9-a1b3-470e-a782-8edf5b3d464b" (UID: "75175fa9-a1b3-470e-a782-8edf5b3d464b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.172015 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257037 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-etc-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257093 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-kubelet\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257141 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892zd\" (UniqueName: \"kubernetes.io/projected/af354b52-1382-4d70-9cfa-91917df4183a-kube-api-access-892zd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257168 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-node-log\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257182 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-ovnkube-config\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257209 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-cni-netd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257235 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-log-socket\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257254 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-var-lib-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257277 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-cni-bin\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257298 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af354b52-1382-4d70-9cfa-91917df4183a-ovn-node-metrics-cert\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257348 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257374 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-systemd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257396 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-env-overrides\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257420 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-systemd-units\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257441 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-run-ovn-kubernetes\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257471 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-ovn\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257488 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-run-netns\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257504 4707 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-slash\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257519 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257538 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-ovnkube-script-lib\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257589 4707 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-node-log\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257600 4707 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257610 4707 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257618 4707 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257626 4707 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75175fa9-a1b3-470e-a782-8edf5b3d464b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257635 4707 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257643 4707 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257651 4707 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257659 4707 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-slash\") 
on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257669 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjvt2\" (UniqueName: \"kubernetes.io/projected/75175fa9-a1b3-470e-a782-8edf5b3d464b-kube-api-access-rjvt2\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257677 4707 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.257686 4707 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75175fa9-a1b3-470e-a782-8edf5b3d464b-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.358909 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-env-overrides\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.358959 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-systemd-units\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.358998 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-run-ovn-kubernetes\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359023 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-ovn\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359037 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-systemd-units\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359068 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-run-netns\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359045 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-run-netns\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc 
kubenswrapper[4707]: I1213 07:10:05.359095 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-ovn\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359123 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-run-ovn-kubernetes\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359141 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-slash\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359278 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-slash\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359299 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359279 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359348 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-ovnkube-script-lib\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359384 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-etc-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359406 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-kubelet\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 
07:10:05.359435 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-892zd\" (UniqueName: \"kubernetes.io/projected/af354b52-1382-4d70-9cfa-91917df4183a-kube-api-access-892zd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359489 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-etc-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359524 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-kubelet\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359547 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-node-log\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359578 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-node-log\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359581 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-ovnkube-config\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359616 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-cni-netd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359639 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-log-socket\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359661 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-var-lib-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359691 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-cni-bin\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359702 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-env-overrides\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359722 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af354b52-1382-4d70-9cfa-91917df4183a-ovn-node-metrics-cert\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359746 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359753 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-log-socket\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359768 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-systemd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359783 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-cni-bin\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359795 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-host-cni-netd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359843 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-var-lib-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359897 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-openvswitch\") pod \"ovnkube-node-gsffn\" (UID: 
\"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.359919 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/af354b52-1382-4d70-9cfa-91917df4183a-run-systemd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.360543 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-ovnkube-config\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.360847 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af354b52-1382-4d70-9cfa-91917df4183a-ovnkube-script-lib\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.364632 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af354b52-1382-4d70-9cfa-91917df4183a-ovn-node-metrics-cert\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.378858 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-892zd\" (UniqueName: \"kubernetes.io/projected/af354b52-1382-4d70-9cfa-91917df4183a-kube-api-access-892zd\") pod \"ovnkube-node-gsffn\" (UID: \"af354b52-1382-4d70-9cfa-91917df4183a\") " pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.483752 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:05 crc kubenswrapper[4707]: W1213 07:10:05.501087 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf354b52_1382_4d70_9cfa_91917df4183a.slice/crio-a17aab08591121ac67918d4e8028c6fac7446993ea0684935a9df099ec1e87c6 WatchSource:0}: Error finding container a17aab08591121ac67918d4e8028c6fac7446993ea0684935a9df099ec1e87c6: Status 404 returned error can't find the container with id a17aab08591121ac67918d4e8028c6fac7446993ea0684935a9df099ec1e87c6 Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.786421 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg6dr_75175fa9-a1b3-470e-a782-8edf5b3d464b/ovn-acl-logging/0.log" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.787391 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wg6dr_75175fa9-a1b3-470e-a782-8edf5b3d464b/ovn-controller/0.log" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.787924 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" event={"ID":"75175fa9-a1b3-470e-a782-8edf5b3d464b","Type":"ContainerDied","Data":"cba0fc6650df7a6812c775a8b9e925939bbf1f3848e30a1e9358f2bbfde32f11"} Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.787988 4707 scope.go:117] "RemoveContainer" containerID="435016b9a8aedf8c26700b1279b7dc38ded11ab6c7d147f5cc133c3a05b27e9e" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.787998 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wg6dr" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.791409 4707 generic.go:334] "Generic (PLEG): container finished" podID="af354b52-1382-4d70-9cfa-91917df4183a" containerID="697d20a008b88523a58d83edf8ad51da95118446813e27f83952143f7d41b8c2" exitCode=0 Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.791492 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerDied","Data":"697d20a008b88523a58d83edf8ad51da95118446813e27f83952143f7d41b8c2"} Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.791556 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"a17aab08591121ac67918d4e8028c6fac7446993ea0684935a9df099ec1e87c6"} Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.797207 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kmn45_c99dbcda-959f-4e7c-80ef-ba6f268ee22d/kube-multus/0.log" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.797255 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kmn45" event={"ID":"c99dbcda-959f-4e7c-80ef-ba6f268ee22d","Type":"ContainerStarted","Data":"d0a47439e6859b29378f134d30ef9bcb70c07c944cf0d768bdb91559816ad3b6"} Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.818438 4707 scope.go:117] "RemoveContainer" containerID="49e3de166d57b13c2482a740c648ea5a0e884f61418cb7d2ef7eb8699aac2db5" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.821071 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg6dr"] Dec 13 07:10:05 crc 
kubenswrapper[4707]: I1213 07:10:05.825094 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wg6dr"] Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.838313 4707 scope.go:117] "RemoveContainer" containerID="9ef491ec513059679af28fc2fb363e749b4299a9d2e73359ea9bc7e2ac9b155d" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.864914 4707 scope.go:117] "RemoveContainer" containerID="a57d7bf1733c9f6cbd1a3a4097fc43695cedac944f8b58559ed648f670ec467e" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.898569 4707 scope.go:117] "RemoveContainer" containerID="70fb495b484807ae9dfc81f2ac8873535d393248a9347f981767742b33fa8e87" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.915735 4707 scope.go:117] "RemoveContainer" containerID="230d17f3f2abe60964e8980445b9843d73950af2664a639da41c7afe49cc4d07" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.935406 4707 scope.go:117] "RemoveContainer" containerID="1e88ec0cfadd022acfdd5c85374f6d815597557529db009d9347695612d8a8d9" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.948778 4707 scope.go:117] "RemoveContainer" containerID="767ec9824a1b0a79b27ddbfacf8a6b826f923cacae2e04828df966a96703f4eb" Dec 13 07:10:05 crc kubenswrapper[4707]: I1213 07:10:05.966704 4707 scope.go:117] "RemoveContainer" containerID="c3048549c12991cf593cee41418f3ac0b657563bac3eb4a98be4eb1b7fe17258" Dec 13 07:10:06 crc kubenswrapper[4707]: I1213 07:10:06.812557 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"48f288084314e6063b0076c2c3d5fc02c5dd1114d97aea1a53e9eca68591f9ef"} Dec 13 07:10:06 crc kubenswrapper[4707]: I1213 07:10:06.812894 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"616a088da74c4318402a95f0bd680df969d071123007d96784ee43f62e78d06b"} Dec 13 07:10:06 crc kubenswrapper[4707]: I1213 07:10:06.812905 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"653da96c2d84522d15ebe6d80310b084093221329b1930f322c6487c1170ef31"} Dec 13 07:10:06 crc kubenswrapper[4707]: I1213 07:10:06.812915 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"f49846ebbfc374b9205677fd9f287d1e844ba8c005752898975ab1a2e6d65ff9"} Dec 13 07:10:06 crc kubenswrapper[4707]: I1213 07:10:06.812924 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"dbaf8848f3bb55d3b8fc592cc683f9bc854ff19440b4133ae672379b6d5b6217"} Dec 13 07:10:06 crc kubenswrapper[4707]: I1213 07:10:06.812932 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"f5fdc38d102d7b4051f1354ff40938f98e291c32fec1a4f9a460ce2a23e37293"} Dec 13 07:10:07 crc kubenswrapper[4707]: I1213 07:10:07.754758 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75175fa9-a1b3-470e-a782-8edf5b3d464b" 
path="/var/lib/kubelet/pods/75175fa9-a1b3-470e-a782-8edf5b3d464b/volumes" Dec 13 07:10:08 crc kubenswrapper[4707]: I1213 07:10:08.830674 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"88508398aa444f613bc2694da3c4cd31d7f95387044e0e57b4e212d6aaeb2c10"} Dec 13 07:10:09 crc kubenswrapper[4707]: I1213 07:10:09.540852 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-x6nvs" Dec 13 07:10:11 crc kubenswrapper[4707]: I1213 07:10:11.852459 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" event={"ID":"af354b52-1382-4d70-9cfa-91917df4183a","Type":"ContainerStarted","Data":"f7e5bee0894916790d5e29e0198cbde1a61f2618566b8754902a09c50b71e48b"} Dec 13 07:10:12 crc kubenswrapper[4707]: I1213 07:10:12.858378 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:12 crc kubenswrapper[4707]: I1213 07:10:12.858526 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:12 crc kubenswrapper[4707]: I1213 07:10:12.858636 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:12 crc kubenswrapper[4707]: I1213 07:10:12.908122 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:12 crc kubenswrapper[4707]: I1213 07:10:12.975471 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:12 crc kubenswrapper[4707]: I1213 07:10:12.990192 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" podStartSLOduration=7.990175674 podStartE2EDuration="7.990175674s" podCreationTimestamp="2025-12-13 07:10:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 07:10:12.906336044 +0000 UTC m=+921.403515111" watchObservedRunningTime="2025-12-13 07:10:12.990175674 +0000 UTC m=+921.487354721" Dec 13 07:10:35 crc kubenswrapper[4707]: I1213 07:10:35.509382 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gsffn" Dec 13 07:10:49 crc kubenswrapper[4707]: I1213 07:10:49.970055 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zrhlq"] Dec 13 07:10:49 crc kubenswrapper[4707]: I1213 07:10:49.971463 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:49 crc kubenswrapper[4707]: I1213 07:10:49.989911 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zrhlq"] Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.050338 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4tt\" (UniqueName: \"kubernetes.io/projected/61ff56e7-18ac-4dec-8d72-5d0d53a06805-kube-api-access-pv4tt\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.050418 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-utilities\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.050445 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-catalog-content\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.151732 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-utilities\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.151792 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-catalog-content\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.151848 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4tt\" (UniqueName: \"kubernetes.io/projected/61ff56e7-18ac-4dec-8d72-5d0d53a06805-kube-api-access-pv4tt\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.152430 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-catalog-content\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.153029 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-utilities\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.175976 4707 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pv4tt\" (UniqueName: \"kubernetes.io/projected/61ff56e7-18ac-4dec-8d72-5d0d53a06805-kube-api-access-pv4tt\") pod \"certified-operators-zrhlq\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.288981 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:10:50 crc kubenswrapper[4707]: I1213 07:10:50.584081 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zrhlq"] Dec 13 07:10:51 crc kubenswrapper[4707]: I1213 07:10:51.135673 4707 generic.go:334] "Generic (PLEG): container finished" podID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerID="2b228ccbe56cfb10abd900aa9364bc9fe3c45f8c15e10969d8fbb02e93485f4e" exitCode=0 Dec 13 07:10:51 crc kubenswrapper[4707]: I1213 07:10:51.135773 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zrhlq" event={"ID":"61ff56e7-18ac-4dec-8d72-5d0d53a06805","Type":"ContainerDied","Data":"2b228ccbe56cfb10abd900aa9364bc9fe3c45f8c15e10969d8fbb02e93485f4e"} Dec 13 07:10:51 crc kubenswrapper[4707]: I1213 07:10:51.136024 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zrhlq" event={"ID":"61ff56e7-18ac-4dec-8d72-5d0d53a06805","Type":"ContainerStarted","Data":"6ae497957f3f7985d11e14712e06bbd513e4c2d2857edb0341fb71a1e770e176"} Dec 13 07:10:53 crc kubenswrapper[4707]: I1213 07:10:53.150728 4707 generic.go:334] "Generic (PLEG): container finished" podID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerID="c22a2ec2e2e6f145c186942fa25d452691b7f1ebab6e3ed53df162c410b56740" exitCode=0 Dec 13 07:10:53 crc kubenswrapper[4707]: I1213 07:10:53.150833 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zrhlq" event={"ID":"61ff56e7-18ac-4dec-8d72-5d0d53a06805","Type":"ContainerDied","Data":"c22a2ec2e2e6f145c186942fa25d452691b7f1ebab6e3ed53df162c410b56740"} Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.161798 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zrhlq" event={"ID":"61ff56e7-18ac-4dec-8d72-5d0d53a06805","Type":"ContainerStarted","Data":"b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5"} Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.189115 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zrhlq" podStartSLOduration=2.590947435 podStartE2EDuration="5.189098461s" podCreationTimestamp="2025-12-13 07:10:49 +0000 UTC" firstStartedPulling="2025-12-13 07:10:51.13756182 +0000 UTC m=+959.634740877" lastFinishedPulling="2025-12-13 07:10:53.735712836 +0000 UTC m=+962.232891903" observedRunningTime="2025-12-13 07:10:54.186016504 +0000 UTC m=+962.683195551" watchObservedRunningTime="2025-12-13 07:10:54.189098461 +0000 UTC m=+962.686277508" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.749461 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knccv"] Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.750635 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.766189 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knccv"] Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.833713 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb858\" (UniqueName: \"kubernetes.io/projected/3d3047c2-079d-4e46-87df-3c6700204b8a-kube-api-access-zb858\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.833753 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-catalog-content\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.833803 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-utilities\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.935033 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-utilities\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.935146 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb858\" (UniqueName: \"kubernetes.io/projected/3d3047c2-079d-4e46-87df-3c6700204b8a-kube-api-access-zb858\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.935168 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-catalog-content\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.935816 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-catalog-content\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.935880 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-utilities\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:54 crc kubenswrapper[4707]: I1213 07:10:54.960418 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zb858\" (UniqueName: \"kubernetes.io/projected/3d3047c2-079d-4e46-87df-3c6700204b8a-kube-api-access-zb858\") pod \"redhat-operators-knccv\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:55 crc kubenswrapper[4707]: I1213 07:10:55.078850 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:10:55 crc kubenswrapper[4707]: I1213 07:10:55.300037 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knccv"] Dec 13 07:10:55 crc kubenswrapper[4707]: W1213 07:10:55.308520 4707 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d3047c2_079d_4e46_87df_3c6700204b8a.slice/crio-e1915044d64624d31c7870f7ab0619b0653525319b85f7ebc0f8d8c9974e672d WatchSource:0}: Error finding container e1915044d64624d31c7870f7ab0619b0653525319b85f7ebc0f8d8c9974e672d: Status 404 returned error can't find the container with id e1915044d64624d31c7870f7ab0619b0653525319b85f7ebc0f8d8c9974e672d Dec 13 07:10:56 crc kubenswrapper[4707]: I1213 07:10:56.187221 4707 generic.go:334] "Generic (PLEG): container finished" podID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerID="07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e" exitCode=0 Dec 13 07:10:56 crc kubenswrapper[4707]: I1213 07:10:56.187361 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerDied","Data":"07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e"} Dec 13 07:10:56 crc kubenswrapper[4707]: I1213 07:10:56.187505 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerStarted","Data":"e1915044d64624d31c7870f7ab0619b0653525319b85f7ebc0f8d8c9974e672d"} Dec 13 07:10:58 crc kubenswrapper[4707]: I1213 07:10:58.199720 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerStarted","Data":"ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd"} Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.207709 4707 generic.go:334] "Generic (PLEG): container finished" podID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerID="ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd" exitCode=0 Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.207785 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerDied","Data":"ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd"} Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.532451 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jjzdd/must-gather-wzvvr"] Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.537735 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.559479 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jjzdd"/"kube-root-ca.crt" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.559711 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jjzdd"/"openshift-service-ca.crt" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.604056 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96z6f\" (UniqueName: \"kubernetes.io/projected/9f643d5c-42de-4c94-a3c8-d5daeda3363c-kube-api-access-96z6f\") pod \"must-gather-wzvvr\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.604154 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9f643d5c-42de-4c94-a3c8-d5daeda3363c-must-gather-output\") pod \"must-gather-wzvvr\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.621647 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jjzdd/must-gather-wzvvr"] Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.705632 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96z6f\" (UniqueName: \"kubernetes.io/projected/9f643d5c-42de-4c94-a3c8-d5daeda3363c-kube-api-access-96z6f\") pod \"must-gather-wzvvr\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.705743 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9f643d5c-42de-4c94-a3c8-d5daeda3363c-must-gather-output\") pod \"must-gather-wzvvr\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.706342 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9f643d5c-42de-4c94-a3c8-d5daeda3363c-must-gather-output\") pod \"must-gather-wzvvr\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.738665 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96z6f\" (UniqueName: \"kubernetes.io/projected/9f643d5c-42de-4c94-a3c8-d5daeda3363c-kube-api-access-96z6f\") pod \"must-gather-wzvvr\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:10:59 crc kubenswrapper[4707]: I1213 07:10:59.876387 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:11:00 crc kubenswrapper[4707]: I1213 07:11:00.074261 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jjzdd/must-gather-wzvvr"] Dec 13 07:11:00 crc kubenswrapper[4707]: I1213 07:11:00.216561 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" event={"ID":"9f643d5c-42de-4c94-a3c8-d5daeda3363c","Type":"ContainerStarted","Data":"9ea300743042e8646de07f98a417e829ac7db3193c463803702669e0e0d8c28f"} Dec 13 07:11:00 crc kubenswrapper[4707]: I1213 07:11:00.289610 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:11:00 crc kubenswrapper[4707]: I1213 07:11:00.289729 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:11:00 crc kubenswrapper[4707]: I1213 07:11:00.342326 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:11:01 crc kubenswrapper[4707]: I1213 07:11:01.224093 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerStarted","Data":"b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae"} Dec 13 07:11:01 crc kubenswrapper[4707]: I1213 07:11:01.273538 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:11:01 crc kubenswrapper[4707]: I1213 07:11:01.302177 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knccv" podStartSLOduration=3.29314129 podStartE2EDuration="7.302152664s" podCreationTimestamp="2025-12-13 07:10:54 +0000 UTC" firstStartedPulling="2025-12-13 07:10:56.189021964 +0000 UTC m=+964.686201001" lastFinishedPulling="2025-12-13 07:11:00.198033298 +0000 UTC m=+968.695212375" observedRunningTime="2025-12-13 07:11:01.246289993 +0000 UTC m=+969.743469040" watchObservedRunningTime="2025-12-13 07:11:01.302152664 +0000 UTC m=+969.799331711" Dec 13 07:11:02 crc kubenswrapper[4707]: I1213 07:11:02.343733 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zrhlq"] Dec 13 07:11:02 crc kubenswrapper[4707]: I1213 07:11:02.954499 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k6m"] Dec 13 07:11:02 crc kubenswrapper[4707]: I1213 07:11:02.955558 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:02 crc kubenswrapper[4707]: I1213 07:11:02.972783 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k6m"] Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.052442 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-utilities\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.052549 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mxhd\" (UniqueName: \"kubernetes.io/projected/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-kube-api-access-4mxhd\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.052578 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-catalog-content\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.154566 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-utilities\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.154663 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mxhd\" (UniqueName: \"kubernetes.io/projected/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-kube-api-access-4mxhd\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.154686 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-catalog-content\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.155036 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-utilities\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.155107 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-catalog-content\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.178496 4707 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4mxhd\" (UniqueName: \"kubernetes.io/projected/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-kube-api-access-4mxhd\") pod \"redhat-marketplace-c6k6m\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.245487 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zrhlq" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="registry-server" containerID="cri-o://b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5" gracePeriod=2 Dec 13 07:11:03 crc kubenswrapper[4707]: I1213 07:11:03.280528 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:05 crc kubenswrapper[4707]: I1213 07:11:05.079240 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:11:05 crc kubenswrapper[4707]: I1213 07:11:05.079575 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:11:06 crc kubenswrapper[4707]: I1213 07:11:06.124532 4707 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knccv" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="registry-server" probeResult="failure" output=< Dec 13 07:11:06 crc kubenswrapper[4707]: timeout: failed to connect service ":50051" within 1s Dec 13 07:11:06 crc kubenswrapper[4707]: > Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.149987 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qrjm6"] Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.152441 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.166703 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrjm6"] Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.313399 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-utilities\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.313451 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-catalog-content\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.313517 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wdj\" (UniqueName: \"kubernetes.io/projected/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-kube-api-access-m6wdj\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.415058 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-utilities\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.415100 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-catalog-content\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.415125 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wdj\" (UniqueName: \"kubernetes.io/projected/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-kube-api-access-m6wdj\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.415793 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-utilities\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.416032 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-catalog-content\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.440403 4707 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m6wdj\" (UniqueName: \"kubernetes.io/projected/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-kube-api-access-m6wdj\") pod \"community-operators-qrjm6\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:07 crc kubenswrapper[4707]: I1213 07:11:07.470606 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:09 crc kubenswrapper[4707]: I1213 07:11:09.282786 4707 generic.go:334] "Generic (PLEG): container finished" podID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerID="b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5" exitCode=0 Dec 13 07:11:09 crc kubenswrapper[4707]: I1213 07:11:09.282828 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zrhlq" event={"ID":"61ff56e7-18ac-4dec-8d72-5d0d53a06805","Type":"ContainerDied","Data":"b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5"} Dec 13 07:11:10 crc kubenswrapper[4707]: E1213 07:11:10.290630 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5 is running failed: container process not found" containerID="b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 07:11:10 crc kubenswrapper[4707]: E1213 07:11:10.291130 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5 is running failed: container process not found" containerID="b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 07:11:10 crc kubenswrapper[4707]: E1213 07:11:10.291513 4707 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5 is running failed: container process not found" containerID="b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5" cmd=["grpc_health_probe","-addr=:50051"] Dec 13 07:11:10 crc kubenswrapper[4707]: E1213 07:11:10.291567 4707 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-zrhlq" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="registry-server" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.606299 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.707370 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv4tt\" (UniqueName: \"kubernetes.io/projected/61ff56e7-18ac-4dec-8d72-5d0d53a06805-kube-api-access-pv4tt\") pod \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.707427 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-utilities\") pod \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.707512 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-catalog-content\") pod \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\" (UID: \"61ff56e7-18ac-4dec-8d72-5d0d53a06805\") " Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.708368 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-utilities" (OuterVolumeSpecName: "utilities") pod "61ff56e7-18ac-4dec-8d72-5d0d53a06805" (UID: "61ff56e7-18ac-4dec-8d72-5d0d53a06805"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.714694 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ff56e7-18ac-4dec-8d72-5d0d53a06805-kube-api-access-pv4tt" (OuterVolumeSpecName: "kube-api-access-pv4tt") pod "61ff56e7-18ac-4dec-8d72-5d0d53a06805" (UID: "61ff56e7-18ac-4dec-8d72-5d0d53a06805"). InnerVolumeSpecName "kube-api-access-pv4tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.770658 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qrjm6"] Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.792940 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k6m"] Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.798230 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61ff56e7-18ac-4dec-8d72-5d0d53a06805" (UID: "61ff56e7-18ac-4dec-8d72-5d0d53a06805"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.809112 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.809147 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ff56e7-18ac-4dec-8d72-5d0d53a06805-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:14 crc kubenswrapper[4707]: I1213 07:11:14.809162 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv4tt\" (UniqueName: \"kubernetes.io/projected/61ff56e7-18ac-4dec-8d72-5d0d53a06805-kube-api-access-pv4tt\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.165137 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.208357 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.320990 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerStarted","Data":"943ccf2e6571163b537550bd6106fc51c26e775f0b67c831c900b99ef74bc5f7"} Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.324494 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zrhlq" event={"ID":"61ff56e7-18ac-4dec-8d72-5d0d53a06805","Type":"ContainerDied","Data":"6ae497957f3f7985d11e14712e06bbd513e4c2d2857edb0341fb71a1e770e176"} Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.324524 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zrhlq" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.325742 4707 scope.go:117] "RemoveContainer" containerID="b7809ef45281f2c6d34d856120ee88920af722243fa381fcbd43db6f0b8abfc5" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.331448 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerStarted","Data":"71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6"} Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.331513 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerStarted","Data":"724edb17f42641df96f75c76ece8b8e45d0c5f2485f0dbbed843fa4d9d12f837"} Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.349941 4707 scope.go:117] "RemoveContainer" containerID="c22a2ec2e2e6f145c186942fa25d452691b7f1ebab6e3ed53df162c410b56740" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.372173 4707 scope.go:117] "RemoveContainer" containerID="2b228ccbe56cfb10abd900aa9364bc9fe3c45f8c15e10969d8fbb02e93485f4e" Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.390222 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zrhlq"] Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.397616 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zrhlq"] Dec 13 07:11:15 crc kubenswrapper[4707]: I1213 07:11:15.752362 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" path="/var/lib/kubelet/pods/61ff56e7-18ac-4dec-8d72-5d0d53a06805/volumes" Dec 13 07:11:16 crc kubenswrapper[4707]: I1213 07:11:16.338224 4707 generic.go:334] "Generic (PLEG): container finished" podID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerID="77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42" exitCode=0 Dec 13 07:11:16 crc kubenswrapper[4707]: I1213 07:11:16.338280 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerDied","Data":"77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42"} Dec 13 07:11:16 crc kubenswrapper[4707]: I1213 07:11:16.344325 4707 generic.go:334] "Generic (PLEG): container finished" podID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerID="71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6" exitCode=0 Dec 13 07:11:16 crc kubenswrapper[4707]: I1213 07:11:16.344391 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerDied","Data":"71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6"} Dec 13 07:11:16 crc kubenswrapper[4707]: I1213 07:11:16.345965 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" event={"ID":"9f643d5c-42de-4c94-a3c8-d5daeda3363c","Type":"ContainerStarted","Data":"461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3"} Dec 13 07:11:16 crc kubenswrapper[4707]: I1213 07:11:16.346000 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" 
event={"ID":"9f643d5c-42de-4c94-a3c8-d5daeda3363c","Type":"ContainerStarted","Data":"011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1"} Dec 13 07:11:17 crc kubenswrapper[4707]: I1213 07:11:17.353989 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerStarted","Data":"57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c"} Dec 13 07:11:17 crc kubenswrapper[4707]: I1213 07:11:17.356839 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerStarted","Data":"b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725"} Dec 13 07:11:17 crc kubenswrapper[4707]: I1213 07:11:17.372585 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" podStartSLOduration=3.751686965 podStartE2EDuration="18.372566596s" podCreationTimestamp="2025-12-13 07:10:59 +0000 UTC" firstStartedPulling="2025-12-13 07:11:00.083531095 +0000 UTC m=+968.580710142" lastFinishedPulling="2025-12-13 07:11:14.704410726 +0000 UTC m=+983.201589773" observedRunningTime="2025-12-13 07:11:16.463679784 +0000 UTC m=+984.960858831" watchObservedRunningTime="2025-12-13 07:11:17.372566596 +0000 UTC m=+985.869745643" Dec 13 07:11:17 crc kubenswrapper[4707]: I1213 07:11:17.436768 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knccv"] Dec 13 07:11:17 crc kubenswrapper[4707]: I1213 07:11:17.437052 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-knccv" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="registry-server" containerID="cri-o://b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae" gracePeriod=2 Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.299209 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.362590 4707 generic.go:334] "Generic (PLEG): container finished" podID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerID="57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c" exitCode=0 Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.362664 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerDied","Data":"57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c"} Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.365322 4707 generic.go:334] "Generic (PLEG): container finished" podID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerID="b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725" exitCode=0 Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.365384 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerDied","Data":"b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725"} Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.368774 4707 generic.go:334] "Generic (PLEG): container finished" podID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerID="b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae" exitCode=0 Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.368818 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerDied","Data":"b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae"} Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.368847 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knccv" event={"ID":"3d3047c2-079d-4e46-87df-3c6700204b8a","Type":"ContainerDied","Data":"e1915044d64624d31c7870f7ab0619b0653525319b85f7ebc0f8d8c9974e672d"} Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.368869 4707 scope.go:117] "RemoveContainer" containerID="b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.368882 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knccv" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.398035 4707 scope.go:117] "RemoveContainer" containerID="ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.422858 4707 scope.go:117] "RemoveContainer" containerID="07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.440512 4707 scope.go:117] "RemoveContainer" containerID="b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae" Dec 13 07:11:18 crc kubenswrapper[4707]: E1213 07:11:18.440896 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae\": container with ID starting with b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae not found: ID does not exist" containerID="b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.440945 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae"} err="failed to get container status \"b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae\": rpc error: code = NotFound desc = could not find container \"b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae\": container with ID starting with b5fa0e7ede5b89525600081d7dc96a6e8daf413ac05630436cb32f7fca386aae not found: ID does not exist" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.441055 4707 scope.go:117] "RemoveContainer" containerID="ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd" Dec 13 07:11:18 crc kubenswrapper[4707]: E1213 07:11:18.441425 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd\": container with ID starting with ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd not found: ID does not exist" containerID="ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.441461 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd"} err="failed to get container status \"ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd\": rpc error: code = NotFound desc = could not find container \"ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd\": container with ID starting with ed62f3547a1a87fce71fccd3563efe533e16bccc65b463ed8f05347e858e95fd not found: ID does not exist" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.441490 4707 scope.go:117] "RemoveContainer" containerID="07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e" Dec 13 07:11:18 crc kubenswrapper[4707]: E1213 07:11:18.441894 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e\": container with ID starting with 07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e not found: ID does not exist" containerID="07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e" 
Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.441929 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e"} err="failed to get container status \"07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e\": rpc error: code = NotFound desc = could not find container \"07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e\": container with ID starting with 07753965cb3769c93beb6143ce3c1ea9107ef656f6195ab259a026b64b825f1e not found: ID does not exist" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.466726 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-catalog-content\") pod \"3d3047c2-079d-4e46-87df-3c6700204b8a\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.466829 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-utilities\") pod \"3d3047c2-079d-4e46-87df-3c6700204b8a\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.466873 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb858\" (UniqueName: \"kubernetes.io/projected/3d3047c2-079d-4e46-87df-3c6700204b8a-kube-api-access-zb858\") pod \"3d3047c2-079d-4e46-87df-3c6700204b8a\" (UID: \"3d3047c2-079d-4e46-87df-3c6700204b8a\") " Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.467790 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-utilities" (OuterVolumeSpecName: "utilities") pod "3d3047c2-079d-4e46-87df-3c6700204b8a" (UID: "3d3047c2-079d-4e46-87df-3c6700204b8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.474144 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3047c2-079d-4e46-87df-3c6700204b8a-kube-api-access-zb858" (OuterVolumeSpecName: "kube-api-access-zb858") pod "3d3047c2-079d-4e46-87df-3c6700204b8a" (UID: "3d3047c2-079d-4e46-87df-3c6700204b8a"). InnerVolumeSpecName "kube-api-access-zb858". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.568196 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.568238 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb858\" (UniqueName: \"kubernetes.io/projected/3d3047c2-079d-4e46-87df-3c6700204b8a-kube-api-access-zb858\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.590510 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d3047c2-079d-4e46-87df-3c6700204b8a" (UID: "3d3047c2-079d-4e46-87df-3c6700204b8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.669602 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d3047c2-079d-4e46-87df-3c6700204b8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.711928 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knccv"] Dec 13 07:11:18 crc kubenswrapper[4707]: I1213 07:11:18.715862 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-knccv"] Dec 13 07:11:19 crc kubenswrapper[4707]: I1213 07:11:19.375683 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerStarted","Data":"962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6"} Dec 13 07:11:19 crc kubenswrapper[4707]: I1213 07:11:19.381549 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerStarted","Data":"c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80"} Dec 13 07:11:19 crc kubenswrapper[4707]: I1213 07:11:19.418970 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c6k6m" podStartSLOduration=14.885955494 podStartE2EDuration="17.418894056s" podCreationTimestamp="2025-12-13 07:11:02 +0000 UTC" firstStartedPulling="2025-12-13 07:11:16.339364667 +0000 UTC m=+984.836543704" lastFinishedPulling="2025-12-13 07:11:18.872303199 +0000 UTC m=+987.369482266" observedRunningTime="2025-12-13 07:11:19.395440681 +0000 UTC m=+987.892619729" watchObservedRunningTime="2025-12-13 07:11:19.418894056 +0000 UTC m=+987.916073103" Dec 13 07:11:19 crc kubenswrapper[4707]: I1213 07:11:19.419533 4707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qrjm6" podStartSLOduration=9.865139546 podStartE2EDuration="12.419527422s" podCreationTimestamp="2025-12-13 07:11:07 +0000 UTC" firstStartedPulling="2025-12-13 07:11:16.34588943 +0000 UTC m=+984.843068477" lastFinishedPulling="2025-12-13 07:11:18.900277306 +0000 UTC m=+987.397456353" observedRunningTime="2025-12-13 07:11:19.415558093 +0000 UTC m=+987.912737140" watchObservedRunningTime="2025-12-13 07:11:19.419527422 +0000 UTC m=+987.916706469" Dec 13 07:11:19 crc kubenswrapper[4707]: I1213 07:11:19.752623 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" path="/var/lib/kubelet/pods/3d3047c2-079d-4e46-87df-3c6700204b8a/volumes" Dec 13 07:11:23 crc kubenswrapper[4707]: I1213 07:11:23.281005 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:23 crc kubenswrapper[4707]: I1213 07:11:23.283188 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:23 crc kubenswrapper[4707]: I1213 07:11:23.324234 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:27 crc kubenswrapper[4707]: I1213 07:11:27.471167 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:27 crc kubenswrapper[4707]: I1213 07:11:27.471557 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:27 crc kubenswrapper[4707]: I1213 07:11:27.533363 4707 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:28 crc kubenswrapper[4707]: I1213 07:11:28.474746 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:28 crc kubenswrapper[4707]: I1213 07:11:28.764443 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrjm6"] Dec 13 07:11:30 crc kubenswrapper[4707]: I1213 07:11:30.448171 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qrjm6" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="registry-server" containerID="cri-o://c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80" gracePeriod=2 Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.340811 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.454058 4707 generic.go:334] "Generic (PLEG): container finished" podID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerID="c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80" exitCode=0 Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.454105 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerDied","Data":"c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80"} Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.454133 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qrjm6" event={"ID":"53cefb4c-c093-4bdc-86c3-4f37db5da1ed","Type":"ContainerDied","Data":"724edb17f42641df96f75c76ece8b8e45d0c5f2485f0dbbed843fa4d9d12f837"} Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.454151 4707 scope.go:117] "RemoveContainer" containerID="c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.454152 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qrjm6" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.470674 4707 scope.go:117] "RemoveContainer" containerID="b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.486024 4707 scope.go:117] "RemoveContainer" containerID="71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.511054 4707 scope.go:117] "RemoveContainer" containerID="c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80" Dec 13 07:11:31 crc kubenswrapper[4707]: E1213 07:11:31.511519 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80\": container with ID starting with c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80 not found: ID does not exist" containerID="c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.511559 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80"} err="failed to get container status \"c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80\": rpc error: code = NotFound desc = could not find container \"c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80\": container with ID starting with c491371b991e6c84e438b50417ee1c078b9447e1753f03fe07559f933417aa80 not found: ID does not exist" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.511585 4707 scope.go:117] "RemoveContainer" containerID="b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725" Dec 13 07:11:31 crc kubenswrapper[4707]: E1213 07:11:31.511824 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725\": container with ID starting with b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725 not found: ID does not exist" containerID="b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.511860 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725"} err="failed to get container status \"b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725\": rpc error: code = NotFound desc = could not find container \"b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725\": container with ID starting with b6bd313a935fd9a13752194a7ff010c79275ee0fdfc46cfcc4d2678fcd0e5725 not found: ID does not exist" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.511879 4707 scope.go:117] "RemoveContainer" containerID="71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6" Dec 13 07:11:31 crc kubenswrapper[4707]: E1213 07:11:31.512145 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6\": container with ID starting with 71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6 not found: ID does not exist" containerID="71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6" 
Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.512180 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6"} err="failed to get container status \"71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6\": rpc error: code = NotFound desc = could not find container \"71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6\": container with ID starting with 71ca4d6924fb7f87eecf4c2fb869a15987c843e2f18b03f529a42f05504b32d6 not found: ID does not exist" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.524572 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-utilities\") pod \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.524658 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-catalog-content\") pod \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.524724 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6wdj\" (UniqueName: \"kubernetes.io/projected/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-kube-api-access-m6wdj\") pod \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\" (UID: \"53cefb4c-c093-4bdc-86c3-4f37db5da1ed\") " Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.525659 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-utilities" (OuterVolumeSpecName: "utilities") pod "53cefb4c-c093-4bdc-86c3-4f37db5da1ed" (UID: "53cefb4c-c093-4bdc-86c3-4f37db5da1ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.529143 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-kube-api-access-m6wdj" (OuterVolumeSpecName: "kube-api-access-m6wdj") pod "53cefb4c-c093-4bdc-86c3-4f37db5da1ed" (UID: "53cefb4c-c093-4bdc-86c3-4f37db5da1ed"). InnerVolumeSpecName "kube-api-access-m6wdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.576770 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53cefb4c-c093-4bdc-86c3-4f37db5da1ed" (UID: "53cefb4c-c093-4bdc-86c3-4f37db5da1ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.625668 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.625708 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.625719 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6wdj\" (UniqueName: \"kubernetes.io/projected/53cefb4c-c093-4bdc-86c3-4f37db5da1ed-kube-api-access-m6wdj\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.802030 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qrjm6"] Dec 13 07:11:31 crc kubenswrapper[4707]: I1213 07:11:31.809648 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qrjm6"] Dec 13 07:11:33 crc kubenswrapper[4707]: I1213 07:11:33.331248 4707 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:33 crc kubenswrapper[4707]: I1213 07:11:33.752823 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" path="/var/lib/kubelet/pods/53cefb4c-c093-4bdc-86c3-4f37db5da1ed/volumes" Dec 13 07:11:34 crc kubenswrapper[4707]: I1213 07:11:34.165711 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k6m"] Dec 13 07:11:34 crc kubenswrapper[4707]: I1213 07:11:34.166025 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c6k6m" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="registry-server" containerID="cri-o://962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6" gracePeriod=2 Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.389870 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.480373 4707 generic.go:334] "Generic (PLEG): container finished" podID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerID="962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6" exitCode=0 Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.480453 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k6m" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.480430 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerDied","Data":"962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6"} Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.480612 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k6m" event={"ID":"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a","Type":"ContainerDied","Data":"943ccf2e6571163b537550bd6106fc51c26e775f0b67c831c900b99ef74bc5f7"} Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.480651 4707 scope.go:117] "RemoveContainer" containerID="962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.487982 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-catalog-content\") pod \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.488055 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mxhd\" (UniqueName: \"kubernetes.io/projected/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-kube-api-access-4mxhd\") pod \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.488109 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-utilities\") pod \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\" (UID: \"63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a\") " Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.489239 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-utilities" (OuterVolumeSpecName: "utilities") pod "63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" (UID: "63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.495168 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-kube-api-access-4mxhd" (OuterVolumeSpecName: "kube-api-access-4mxhd") pod "63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" (UID: "63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a"). InnerVolumeSpecName "kube-api-access-4mxhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.497221 4707 scope.go:117] "RemoveContainer" containerID="57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.525536 4707 scope.go:117] "RemoveContainer" containerID="77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.541006 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" (UID: "63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.562686 4707 scope.go:117] "RemoveContainer" containerID="962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6" Dec 13 07:11:36 crc kubenswrapper[4707]: E1213 07:11:36.563253 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6\": container with ID starting with 962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6 not found: ID does not exist" containerID="962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.563285 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6"} err="failed to get container status \"962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6\": rpc error: code = NotFound desc = could not find container \"962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6\": container with ID starting with 962202e656f69a111c6508365131a82c6495030dfce10ae12ca8c24a596c9ea6 not found: ID does not exist" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.563322 4707 scope.go:117] "RemoveContainer" containerID="57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c" Dec 13 07:11:36 crc kubenswrapper[4707]: E1213 07:11:36.565336 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c\": container with ID starting with 57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c not found: ID does not exist" containerID="57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.565368 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c"} err="failed to get container status \"57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c\": rpc error: code = NotFound desc = could not find container \"57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c\": container with ID starting with 57b34bae491e676cbc0340004dddf1aca3d074d993687d57c188cdff908d690c not found: ID does not exist" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.565388 4707 scope.go:117] "RemoveContainer" containerID="77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42" Dec 13 07:11:36 crc kubenswrapper[4707]: 
E1213 07:11:36.565855 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42\": container with ID starting with 77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42 not found: ID does not exist" containerID="77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.565881 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42"} err="failed to get container status \"77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42\": rpc error: code = NotFound desc = could not find container \"77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42\": container with ID starting with 77887145c13a2134ad9236dfe684e9c94945154c1031eda4ed90f45fce59bd42 not found: ID does not exist" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.589726 4707 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.589771 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mxhd\" (UniqueName: \"kubernetes.io/projected/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-kube-api-access-4mxhd\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.589785 4707 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a-utilities\") on node \"crc\" DevicePath \"\"" Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.807099 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k6m"] Dec 13 07:11:36 crc kubenswrapper[4707]: I1213 07:11:36.810246 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k6m"] Dec 13 07:11:37 crc kubenswrapper[4707]: I1213 07:11:37.754804 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" path="/var/lib/kubelet/pods/63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a/volumes" Dec 13 07:11:50 crc kubenswrapper[4707]: I1213 07:11:50.542692 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rnl8c_fe022427-e3d6-4213-9d27-67555b2bce9f/control-plane-machine-set-operator/0.log" Dec 13 07:11:50 crc kubenswrapper[4707]: I1213 07:11:50.683708 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pwlxr_bb92b62b-9ab2-46d8-bd2b-cee2d1133778/machine-api-operator/0.log" Dec 13 07:11:50 crc kubenswrapper[4707]: I1213 07:11:50.692290 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pwlxr_bb92b62b-9ab2-46d8-bd2b-cee2d1133778/kube-rbac-proxy/0.log" Dec 13 07:11:53 crc kubenswrapper[4707]: I1213 07:11:53.348249 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Dec 13 07:11:53 crc kubenswrapper[4707]: I1213 07:11:53.348507 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:12:02 crc kubenswrapper[4707]: I1213 07:12:02.202832 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d67ww_2c36b58d-7422-4016-b37a-cbe2b73d7168/cert-manager-controller/0.log" Dec 13 07:12:02 crc kubenswrapper[4707]: I1213 07:12:02.217843 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-r6xpb_7f59dcbe-3c45-493b-8a33-53dde9b70415/cert-manager-cainjector/0.log" Dec 13 07:12:02 crc kubenswrapper[4707]: I1213 07:12:02.378073 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-x6nvs_d286b639-0e27-47ac-a85b-dbb12e6cbabe/cert-manager-webhook/0.log" Dec 13 07:12:18 crc kubenswrapper[4707]: I1213 07:12:18.703705 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/extract-utilities/0.log" Dec 13 07:12:18 crc kubenswrapper[4707]: I1213 07:12:18.837488 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/extract-content/0.log" Dec 13 07:12:18 crc kubenswrapper[4707]: I1213 07:12:18.887797 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/extract-utilities/0.log" Dec 13 07:12:18 crc kubenswrapper[4707]: I1213 07:12:18.922766 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/extract-content/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.073158 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/extract-utilities/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.096414 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/extract-content/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.285564 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6n6sc_7fab4da9-5bcc-452f-a764-4050a276c1c9/registry-server/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.320267 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/extract-utilities/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.513767 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/extract-content/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.537764 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/extract-utilities/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.544209 4707 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/extract-content/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.762863 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/extract-content/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.781075 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/extract-utilities/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.857820 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wr87k_7baec2ab-aeee-4166-9fd6-8a3b92a3aa5c/registry-server/0.log" Dec 13 07:12:19 crc kubenswrapper[4707]: I1213 07:12:19.964780 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-99vtt_3d15d15c-6c4f-4e32-bbaf-caebdb10506c/marketplace-operator/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.020447 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/extract-utilities/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.155012 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/extract-content/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.192036 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/extract-content/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.209877 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/extract-utilities/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.397334 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/extract-content/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.413416 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/registry-server/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.445160 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5tfbq_11257d1f-9513-4baf-a682-819b89ebe8eb/extract-utilities/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.583344 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/extract-utilities/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.700657 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/extract-content/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.719301 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/extract-utilities/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.767200 4707 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/extract-content/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.900288 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/extract-content/0.log" Dec 13 07:12:20 crc kubenswrapper[4707]: I1213 07:12:20.925230 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/extract-utilities/0.log" Dec 13 07:12:21 crc kubenswrapper[4707]: I1213 07:12:21.039131 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-thb7x_b2bdec10-6902-496b-b5a3-12fa77db4bf2/registry-server/0.log" Dec 13 07:12:23 crc kubenswrapper[4707]: I1213 07:12:23.347543 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:12:23 crc kubenswrapper[4707]: I1213 07:12:23.347630 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:12:53 crc kubenswrapper[4707]: I1213 07:12:53.348181 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:12:53 crc kubenswrapper[4707]: I1213 07:12:53.349155 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:12:53 crc kubenswrapper[4707]: I1213 07:12:53.349227 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 07:12:53 crc kubenswrapper[4707]: I1213 07:12:53.350140 4707 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe978c3da49e49b18f17beb4fbb2341d90728ac6f7434b6a66e7d821f3fdc208"} pod="openshift-machine-config-operator/machine-config-daemon-ns45v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 07:12:53 crc kubenswrapper[4707]: I1213 07:12:53.350899 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" containerID="cri-o://fe978c3da49e49b18f17beb4fbb2341d90728ac6f7434b6a66e7d821f3fdc208" gracePeriod=600 Dec 13 07:12:54 crc kubenswrapper[4707]: I1213 07:12:54.033636 4707 generic.go:334] "Generic (PLEG): container finished" podID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" 
containerID="fe978c3da49e49b18f17beb4fbb2341d90728ac6f7434b6a66e7d821f3fdc208" exitCode=0 Dec 13 07:12:54 crc kubenswrapper[4707]: I1213 07:12:54.033682 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerDied","Data":"fe978c3da49e49b18f17beb4fbb2341d90728ac6f7434b6a66e7d821f3fdc208"} Dec 13 07:12:54 crc kubenswrapper[4707]: I1213 07:12:54.034000 4707 scope.go:117] "RemoveContainer" containerID="3b0f48925884135b2d976248bf9fa53c361bfaa4e83eae2a7f0b3a53884319c7" Dec 13 07:12:55 crc kubenswrapper[4707]: I1213 07:12:55.044692 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"9ec2e5856eb33b36ab79c7f862abbb635f8f12b6cd356b9cdd531e5f7ad214ba"} Dec 13 07:13:23 crc kubenswrapper[4707]: I1213 07:13:23.191075 4707 generic.go:334] "Generic (PLEG): container finished" podID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerID="011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1" exitCode=0 Dec 13 07:13:23 crc kubenswrapper[4707]: I1213 07:13:23.191136 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" event={"ID":"9f643d5c-42de-4c94-a3c8-d5daeda3363c","Type":"ContainerDied","Data":"011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1"} Dec 13 07:13:23 crc kubenswrapper[4707]: I1213 07:13:23.192129 4707 scope.go:117] "RemoveContainer" containerID="011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1" Dec 13 07:13:23 crc kubenswrapper[4707]: I1213 07:13:23.432179 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jjzdd_must-gather-wzvvr_9f643d5c-42de-4c94-a3c8-d5daeda3363c/gather/0.log" Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.146637 4707 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jjzdd/must-gather-wzvvr"] Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.147494 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="copy" containerID="cri-o://461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3" gracePeriod=2 Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.150233 4707 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jjzdd/must-gather-wzvvr"] Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.552644 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jjzdd_must-gather-wzvvr_9f643d5c-42de-4c94-a3c8-d5daeda3363c/copy/0.log" Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.553671 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.682280 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96z6f\" (UniqueName: \"kubernetes.io/projected/9f643d5c-42de-4c94-a3c8-d5daeda3363c-kube-api-access-96z6f\") pod \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.682341 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9f643d5c-42de-4c94-a3c8-d5daeda3363c-must-gather-output\") pod \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\" (UID: \"9f643d5c-42de-4c94-a3c8-d5daeda3363c\") " Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.688532 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f643d5c-42de-4c94-a3c8-d5daeda3363c-kube-api-access-96z6f" (OuterVolumeSpecName: "kube-api-access-96z6f") pod "9f643d5c-42de-4c94-a3c8-d5daeda3363c" (UID: "9f643d5c-42de-4c94-a3c8-d5daeda3363c"). InnerVolumeSpecName "kube-api-access-96z6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.729195 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f643d5c-42de-4c94-a3c8-d5daeda3363c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9f643d5c-42de-4c94-a3c8-d5daeda3363c" (UID: "9f643d5c-42de-4c94-a3c8-d5daeda3363c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.783905 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96z6f\" (UniqueName: \"kubernetes.io/projected/9f643d5c-42de-4c94-a3c8-d5daeda3363c-kube-api-access-96z6f\") on node \"crc\" DevicePath \"\"" Dec 13 07:13:30 crc kubenswrapper[4707]: I1213 07:13:30.783941 4707 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9f643d5c-42de-4c94-a3c8-d5daeda3363c-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.240185 4707 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jjzdd_must-gather-wzvvr_9f643d5c-42de-4c94-a3c8-d5daeda3363c/copy/0.log" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.241352 4707 generic.go:334] "Generic (PLEG): container finished" podID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerID="461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3" exitCode=143 Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.241423 4707 scope.go:117] "RemoveContainer" containerID="461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.241439 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jjzdd/must-gather-wzvvr" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.260524 4707 scope.go:117] "RemoveContainer" containerID="011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.300407 4707 scope.go:117] "RemoveContainer" containerID="461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3" Dec 13 07:13:31 crc kubenswrapper[4707]: E1213 07:13:31.301000 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3\": container with ID starting with 461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3 not found: ID does not exist" containerID="461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.301080 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3"} err="failed to get container status \"461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3\": rpc error: code = NotFound desc = could not find container \"461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3\": container with ID starting with 461f0e5cde77d4005d93170128958f35b5b542d3203bcfc68f23fa712f293ce3 not found: ID does not exist" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.301117 4707 scope.go:117] "RemoveContainer" containerID="011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1" Dec 13 07:13:31 crc kubenswrapper[4707]: E1213 07:13:31.301586 4707 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1\": container with ID starting with 011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1 not found: ID does not exist" containerID="011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.301624 4707 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1"} err="failed to get container status \"011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1\": rpc error: code = NotFound desc = could not find container \"011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1\": container with ID starting with 011f3a125f875c1620121f868bdab8eb364737cfd144bc5b1b7af20fee0f68f1 not found: ID does not exist" Dec 13 07:13:31 crc kubenswrapper[4707]: I1213 07:13:31.753282 4707 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" path="/var/lib/kubelet/pods/9f643d5c-42de-4c94-a3c8-d5daeda3363c/volumes" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.181650 4707 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr"] Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182561 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182599 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" 
containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182619 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="copy" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182628 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="copy" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182641 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182671 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182685 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182694 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182708 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182716 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182747 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182755 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182770 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182778 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182790 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182798 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182829 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="gather" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182838 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="gather" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182847 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182855 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" 
containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182866 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182875 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182885 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182912 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182924 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.182932 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="extract-utilities" Dec 13 07:15:00 crc kubenswrapper[4707]: E1213 07:15:00.182944 4707 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183024 4707 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="extract-content" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183190 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cefb4c-c093-4bdc-86c3-4f37db5da1ed" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183210 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ff56e7-18ac-4dec-8d72-5d0d53a06805" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183226 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3047c2-079d-4e46-87df-3c6700204b8a" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183270 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b9b342-5f7d-4bfc-85e2-73b5fc64ca1a" containerName="registry-server" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183289 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="gather" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183303 4707 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f643d5c-42de-4c94-a3c8-d5daeda3363c" containerName="copy" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.183926 4707 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.186140 4707 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.186280 4707 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.193672 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr"] Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.197447 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b8c7ebc-ae2d-4346-83e6-679950c71de2-secret-volume\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.197493 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b8c7ebc-ae2d-4346-83e6-679950c71de2-config-volume\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.197544 4707 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mh2l\" (UniqueName: \"kubernetes.io/projected/4b8c7ebc-ae2d-4346-83e6-679950c71de2-kube-api-access-6mh2l\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.299065 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b8c7ebc-ae2d-4346-83e6-679950c71de2-secret-volume\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.299127 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b8c7ebc-ae2d-4346-83e6-679950c71de2-config-volume\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.299531 4707 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mh2l\" (UniqueName: \"kubernetes.io/projected/4b8c7ebc-ae2d-4346-83e6-679950c71de2-kube-api-access-6mh2l\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.301638 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b8c7ebc-ae2d-4346-83e6-679950c71de2-config-volume\") pod 
\"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.309941 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b8c7ebc-ae2d-4346-83e6-679950c71de2-secret-volume\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.317247 4707 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mh2l\" (UniqueName: \"kubernetes.io/projected/4b8c7ebc-ae2d-4346-83e6-679950c71de2-kube-api-access-6mh2l\") pod \"collect-profiles-29426835-jm8rr\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.503974 4707 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.710811 4707 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr"] Dec 13 07:15:00 crc kubenswrapper[4707]: I1213 07:15:00.771452 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" event={"ID":"4b8c7ebc-ae2d-4346-83e6-679950c71de2","Type":"ContainerStarted","Data":"82b6f0ec1d895301440df9c0e3f575860716ae87fbb472a9c734caadc8988f1d"} Dec 13 07:15:01 crc kubenswrapper[4707]: I1213 07:15:01.778056 4707 generic.go:334] "Generic (PLEG): container finished" podID="4b8c7ebc-ae2d-4346-83e6-679950c71de2" containerID="c89b2abd6e74f37b06ff55ca345365084a27388ddbd4da44e56f505dae7f8de9" exitCode=0 Dec 13 07:15:01 crc kubenswrapper[4707]: I1213 07:15:01.778102 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" event={"ID":"4b8c7ebc-ae2d-4346-83e6-679950c71de2","Type":"ContainerDied","Data":"c89b2abd6e74f37b06ff55ca345365084a27388ddbd4da44e56f505dae7f8de9"} Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.029226 4707 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.137999 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mh2l\" (UniqueName: \"kubernetes.io/projected/4b8c7ebc-ae2d-4346-83e6-679950c71de2-kube-api-access-6mh2l\") pod \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.138102 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b8c7ebc-ae2d-4346-83e6-679950c71de2-config-volume\") pod \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.138146 4707 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b8c7ebc-ae2d-4346-83e6-679950c71de2-secret-volume\") pod \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\" (UID: \"4b8c7ebc-ae2d-4346-83e6-679950c71de2\") " Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.139345 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b8c7ebc-ae2d-4346-83e6-679950c71de2-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b8c7ebc-ae2d-4346-83e6-679950c71de2" (UID: "4b8c7ebc-ae2d-4346-83e6-679950c71de2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.144558 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b8c7ebc-ae2d-4346-83e6-679950c71de2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4b8c7ebc-ae2d-4346-83e6-679950c71de2" (UID: "4b8c7ebc-ae2d-4346-83e6-679950c71de2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.144790 4707 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b8c7ebc-ae2d-4346-83e6-679950c71de2-kube-api-access-6mh2l" (OuterVolumeSpecName: "kube-api-access-6mh2l") pod "4b8c7ebc-ae2d-4346-83e6-679950c71de2" (UID: "4b8c7ebc-ae2d-4346-83e6-679950c71de2"). InnerVolumeSpecName "kube-api-access-6mh2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.240497 4707 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mh2l\" (UniqueName: \"kubernetes.io/projected/4b8c7ebc-ae2d-4346-83e6-679950c71de2-kube-api-access-6mh2l\") on node \"crc\" DevicePath \"\"" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.240550 4707 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b8c7ebc-ae2d-4346-83e6-679950c71de2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.240578 4707 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4b8c7ebc-ae2d-4346-83e6-679950c71de2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.792782 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" event={"ID":"4b8c7ebc-ae2d-4346-83e6-679950c71de2","Type":"ContainerDied","Data":"82b6f0ec1d895301440df9c0e3f575860716ae87fbb472a9c734caadc8988f1d"} Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.793113 4707 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b6f0ec1d895301440df9c0e3f575860716ae87fbb472a9c734caadc8988f1d" Dec 13 07:15:03 crc kubenswrapper[4707]: I1213 07:15:03.793182 4707 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29426835-jm8rr" Dec 13 07:15:23 crc kubenswrapper[4707]: I1213 07:15:23.348254 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:15:23 crc kubenswrapper[4707]: I1213 07:15:23.348894 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:15:53 crc kubenswrapper[4707]: I1213 07:15:53.347948 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:15:53 crc kubenswrapper[4707]: I1213 07:15:53.348669 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:16:23 crc kubenswrapper[4707]: I1213 07:16:23.347281 4707 patch_prober.go:28] interesting pod/machine-config-daemon-ns45v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 13 07:16:23 crc kubenswrapper[4707]: I1213 
07:16:23.347840 4707 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 13 07:16:23 crc kubenswrapper[4707]: I1213 07:16:23.347888 4707 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" Dec 13 07:16:23 crc kubenswrapper[4707]: I1213 07:16:23.348334 4707 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ec2e5856eb33b36ab79c7f862abbb635f8f12b6cd356b9cdd531e5f7ad214ba"} pod="openshift-machine-config-operator/machine-config-daemon-ns45v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 13 07:16:23 crc kubenswrapper[4707]: I1213 07:16:23.348381 4707 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" podUID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerName="machine-config-daemon" containerID="cri-o://9ec2e5856eb33b36ab79c7f862abbb635f8f12b6cd356b9cdd531e5f7ad214ba" gracePeriod=600 Dec 13 07:16:24 crc kubenswrapper[4707]: I1213 07:16:24.268037 4707 generic.go:334] "Generic (PLEG): container finished" podID="5d5a779a-7592-4da7-950b-3a0679d7a4bf" containerID="9ec2e5856eb33b36ab79c7f862abbb635f8f12b6cd356b9cdd531e5f7ad214ba" exitCode=0 Dec 13 07:16:24 crc kubenswrapper[4707]: I1213 07:16:24.268073 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerDied","Data":"9ec2e5856eb33b36ab79c7f862abbb635f8f12b6cd356b9cdd531e5f7ad214ba"} Dec 13 07:16:24 crc kubenswrapper[4707]: I1213 07:16:24.268375 4707 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ns45v" event={"ID":"5d5a779a-7592-4da7-950b-3a0679d7a4bf","Type":"ContainerStarted","Data":"fbdd420571829bf23e33afe44bb4c9a4b18208d564fa453d09177e7ca5bd6a7c"} Dec 13 07:16:24 crc kubenswrapper[4707]: I1213 07:16:24.268396 4707 scope.go:117] "RemoveContainer" containerID="fe978c3da49e49b18f17beb4fbb2341d90728ac6f7434b6a66e7d821f3fdc208"